14 Evaluation, Effectiveness, and Offender Recidivism
LEARNING OBJECTIVES
After reading this chapter, you will be able to:
· 1. Discuss the importance of evaluative research and the role
of the independent evaluator.
· 2. Explain the importance of quantitative processes in
determining if treatment programs are evidence based in their
practice.
· 3. Explain how validity and reliability are important to the
evaluation process.
· 4. Identify some standardized instruments and explain why
they are beneficial to the evaluation of treatment programs.
· 5. Explain how evaluations of drug treatment programs and
sex offender treatment programs might be conducted.
· 6. Identify some of the ethical considerations when conducting
evaluative research in mental health settings.
· 7. Discuss how evaluation results can be used to improve
treatment program processes and outcomes.
PART ONE: INTRODUCTION TO THE EVALUATION PROCESS
When examining any program, whether therapeutic or
otherwise, one of the first questions asked by politicians, policy
makers, program administrators, and government officials is,
“Does the treatment program work?” In such a case, the
underlying desire is to know if money spent on a program is
money that is well spent. In such cases, treatment providers will
often be required to provide some sort of empirical “evidence”
that the program is effective. This is often referred to as
evidence-based program delivery. Treatment providers are
increasingly being asked to demonstrate the effectiveness of
their programs, particularly when such programs are grant
funded. In turn, many correctional treatment programs seek
money from grant-generating agencies, and, when they have
some sort of documented program success, they increase their
odds of securing such funds.
However, before going further, we would like to make one
observation regarding correctional counseling and research. We
believe that programs are best evaluated by researchers who
themselves are treatment providers. This is particularly true if
the researcher has had specific experience with the type of
population that is the subject of the program evaluation. Both
authors have conducted grant-funded evaluation research of
treatment programs and have also studied and/or worked in a
variety of treatment fields. One author in particular has worked
with most typologies of offenders who have been presented in
this text and has also conducted numerous evaluative studies of
treatment programs that provide services to those offenders. We
believe that this is important because such a practitioner is able
to make sense of data that may seem confusing, uncertain, or
contradictory, simply because they understand how the program
and/or process of treatment intervention works within a given
agency and/or with a specific offender population. With this
said, it is at this point that we now turn our attention to the
notion of evaluation research.
Evaluation Research
For the purposes of this text, we will refer to the Center for Program Evaluation and Performance Management, which is a clearinghouse on evaluative research offered through the Bureau of Justice Assistance (BJA). This source, available online and
referenced in this text, provides the reader with a very good
overview of the evaluation process and also provides a number
of examples pertaining to the evaluation of criminal justice and
treatment programs. Because this is a federal government
website, the information therein is public domain. In addition,
we believe that this site provides a very clear, succinct, and
effective overview of evaluation research from the eyes of the
practitioner. It is for these reasons that this chapter is
constructed from much of the organization and structure of the
BJA website, providing the basics of evaluation research along
with our own insights as to how that information is useful to
correctional counselors.
Evaluation is a systematic and objective method for testing the
success (or failure) of a given program. The primary purpose of
conducting evaluative research is to determine if the
intervention program is achieving its stated goals and
objectives. In the field of correctional counseling, this is
actually very important. It is the observation of the first author
of this text that, in many cases, treatment programs provide
their services but are not truly aware of whether they have
actually “fixed” their clients; this is an important point to
address. Treatment agencies must be able and willing to
demonstrate the effectiveness of their program’s intervention
and this effectiveness should be expressed in quantitative terms.
A failure to do so constitutes negligence on the part of the agency and also creates a potential public safety problem.
Indeed, if the program does not truly work to reform offenders
but the treatment staff continue to operate as if it does,
offenders who are risks to public safety will just continue to
enter society unchanged and just as dangerous or problematic as
before.
Often, counselors and other personnel primarily geared toward
offering therapeutic services do not necessarily understand the
purpose of evaluative research. In addition, it is not uncommon
for such practitioners to also discount the contributions of an
evaluator, claiming that the evaluator cannot possibly know
(better than themselves) whether clients are “getting better,” so
to speak. However, this is often based on intuition on the part of
the therapist and is also not grounded in objective and detached
observation. Evaluative research examines the process and outcome of correctional counseling in an objective and detached manner to determine the truth as to the efficacy of a given program.
All too often, treatment staff may provide anecdotal evidence
and/or selected cases of success. This should be avoided because it is not sufficient to demonstrate effectiveness and leaves too much to interpretation. Rather, it is important that evaluations of
therapeutic programs be conducted by persons who are neutral
and detached from the delivery of therapeutic services and it is
also important that quantitative as well as qualitative measures
be included in that evaluation. Qualitative measures are those
that are not numerical in nature and are based more on the
context and circumstances of the observation. For instance,
clinical case notes, open-ended interviews, and therapist
observations would be examples of qualitative observations. On
the other hand, quantitative measures are those that have a
numerical quantity attached to them. Quantitative measures are
those derived from standardized instruments that provide a
numerical value to the information gathered from a
client.
Working with an Outside Evaluator
One of the first issues that agencies will need to consider is whether to use an evaluation expert and whether that person can be from within the agency or should instead come from outside the agency being evaluated. If the agency has funding
available, it is recommended that they find a trained and
experienced evaluator; such a person can be of great assistance
to the treatment program throughout the evaluation process.
However, it should be noted that agencies and agency staff must
be receptive to the efforts of the evaluator. In many cases,
agency staff may be defensive and/or guarded when providing
information or records. In such cases, it is imperative that
agency leadership ensure that hindrances to data collection and
the communication of client outcomes be sufficiently addressed.
Regardless of whether the evaluator is from within or outside
the agency, it is important that a trained and qualified evaluator
be identified and secured. A failure to achieve this basic
ingredient of the evaluation process will mean that counselors,
clinicians, and perhaps clients, will “feel” as if the treatment
regimen is working but they will not be able to provide any type
of evidence-based support for their opinions. Obviously, this is
not scientifically sound nor is it convincing to any potential
skeptic who might examine the agency. Lastly, a qualified
evaluator should have experience in evaluating treatment
programs and, ideally, should have experience in evaluating
treatment programs similar to the one operated by the agency in
question. The evaluator should also attempt to balance the needs
and concerns of various decision makers with the need for
objectivity while conducting the evaluation.
Once it has been determined that the agency is ready for
evaluation and who the evaluator will be, the process of
developing an evaluation plan begins. Basically, an evaluation
plan describes the process that will be used to conduct an
evaluation of the treatment program (Bureau of Justice
Assistance, 2008). According to the BJA (2008), key elements
of an evaluation plan that should be addressed are (1)
determining the target audience for the evaluation and the
dissemination of its results; (2) identifying the evaluation
questions that should be asked; (3) determining how the
evaluation design will be developed; (4) deciding the type of
data to be collected, how that data will be collected, and by
whom; and (5) articulating the final products of the report that
will be produced.
Lastly, the evaluation plan should detail the roles of various
individuals who will contribute to the evaluation process; these
individuals include the evaluator, the agency management,
treatment staff, clients, family members of clients, and any
other persons impacted by the research.
Likewise, an ideal evaluator will have had experience in
delivery of therapeutic services that are the same or similar to
those provided by the agency. This is important because it
provides the evaluator with additional insight behind the data
that is generated. Such insight can lead to a particularly useful blend of observations that bridge the world of the clinical practitioner and that of the academic researcher; this is the strongest and most useful type of evaluative research that can
be produced.
Quantitative Evaluation of a Drug Treatment Program
An example of an evaluation plan that uses both quantitative
and qualitative aspects of measurement is provided in the
following evaluation description. This information consists of
an evaluation model that the first author designed while
working as an evaluator at a local drug treatment facility. This
evaluation design demonstrates how the treatment staff and the
evaluator may both provide observations, but it is the use of
standardized instruments and collection methods that serve as
the primary data used to determine client progress. (The use of
standardized tools will be discussed later in this chapter.)
Further, this example demonstrates that measures, to be
effective, must be taken over a long period of time and among
many different sources (i.e., agency staff, the evaluator, and/or
family and friends of the client). It is in this manner that a
composite profile of the client’s overall progress is developed.
A. Evaluative Methods. This research design will follow a simple time-series design with repeated measures over the grant-funded period. It is expected that the
evaluative design will allow the agency to address all related
program outcome questions as well as process questions, as
required by this grant-funding opportunity. During the grant-
funded period, weekly staff observations will be conducted to
track client progress through the use of an evaluative rubric that
is based on the basic tenets of operant conditioning strategies.
When observing client progress, staff will ensure that their
noted input is structured in such a manner as to optimize
measurability while including contextual, subjective, and
qualitative data that is deemed clinically useful or relevant.
Further, staff will be required to provide a list of intervention techniques and behavior management tools that utilize each of the four categories of operant conditioning.
B. Data Collection Instruments. In addition, several pretest and post-test measures will be taken to assess both the subject’s recovery from alcohol or drug abuse and their improvement in other co-occurring mental health diagnoses. In addition to quantitative assessments of both of
these areas of client outcome, semistructured qualitative client
observations will be conducted by various staff at the pretest
and post-test stages. One of these forms of interview is known
as the Addiction Severity Index (ASI) and is commonly used in
treatment facilities all over the United States. This will serve as
an initial data collection process on clients and it is expected
that this data will be more useful to treatment staff than to those
having research objectives.
Four other measurement scales will be utilized at intake and at
discharge (three months) of the first phase of treatment. These
scales are as follows: The Drug Abuse Screening Test (Skinner, 1995) is a widely recognized scale providing a quantitative index of the degree of problems related to drug and/or alcohol dependency. The Substance Abuse Subtle
Screening Instrument (SASSI) is a screening measure that
provides interpretations of client profiles and aids in developing
hypotheses that clinicians or researchers may find useful in
understanding persons in treatment. The Behaviors, Attitudes,
Drinking, & Driving Scale (BADDS) will be administered at
intake, program completion, and the three-month follow-up
period. The BADDS is an evidence-based pre- and post-test
psychological questionnaire that measures attitudes, behaviors,
and intervention effectiveness related to impaired driving.
Optionally, the Maryland Addictions Questionnaire (Western
Psychological Services) may be given at intake. This scale
determines severity of addiction; the motivation of the client;
the risk of relapse; and treatment complications related to
cognitive difficulties, anxiety, or depression. When and where
feasible, these scales will likewise be utilized with clients at the
6-month, 9-month, and 12-month periods for subjects in
treatment.
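To illustrate how the intake and discharge administrations of a scale such as the DAST might be compared quantitatively, the following is a minimal sketch in Python. The ten clients, their scores, and the scoring range are hypothetical and are not drawn from the actual evaluation; the sketch simply shows one conventional way to summarize pre/post change with a paired comparison and a simple effect size.
```python
# Minimal sketch: comparing hypothetical intake and discharge scores on a
# standardized screening scale (all values are invented for illustration).
import numpy as np
from scipy import stats

intake = np.array([15, 12, 18, 14, 16, 11, 17, 13, 15, 19])   # hypothetical intake scores
discharge = np.array([9, 8, 12, 10, 11, 7, 13, 9, 10, 14])    # hypothetical discharge scores

t_stat, p_value = stats.ttest_rel(intake, discharge)          # paired (repeated-measures) t-test

change = intake - discharge
cohens_d = change.mean() / change.std(ddof=1)                 # simple within-subject effect size

print(f"mean improvement = {change.mean():.1f} points, "
      f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d:.2f}")
```
Reporting change in this fashion gives the agency the kind of quantitative, outcome-focused evidence discussed earlier in the chapter.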
In addition, weekly observations will be conducted by staff and
these observations will be provided in weekly case notes. Staff
at the facility will specifically focus on observable and
behavioral elements of the client’s progress as this is
considered a better method of judging the client’s progress than
are deductions that are made from the client’s self-proclaimed
introspective work. The staff at the facility are already
accustomed to this approach of case review and will simply
restrict their observations (particularly those placed in writing)
to that which is observed through overt client behavior without
any inference being drawn beyond what is clearly observable
and thus measurable. This should not be a problem since the
state of Louisiana already encourages this type of reference
when compiling case notes and client progress evaluations.
Further, the Substance Abuse Relapse
Assessment (Psychological Assessment Resources) will be
administered to subjects at the 3-, 6-, 9-, and 12-month periods.
This instrument is a structured interview developed for use by
substance abuse treatment professionals to help recovering
individuals recognize signs of relapse (Psychological
Assessment Resources). Likewise, staff will conduct follow-up interviews during this period of time to provide an overall Global Assessment of Functioning (GAF) scale rating for prior clients during the 3-, 6-, 9-, and 12-month period of the study. This will provide an additional metric (ratio
data) measure during the aftercare stages of treatment. Staff
will also be asked to rank the degree of success (on a scale from
1 to 100) that clients have made in reaching their original goals
that were self-contracted in their plan of change. Staff will rank
client success in goal achievement during the 4th, 7th, and 13th
months of the study.
Upon completion of phase one, measures will also be taken at
the close of the 4th, 7th, and 13th months through an informal
survey of friends and family to determine if the subject is
engaging in self-management strategies that were taught during
phase one. These individuals will also be asked to rank the
degree of success (on a scale from 1 to 100) that clients have
made in reaching their original goals that were self-contracted
in their plan of change. The information from these surveys will
be triangulated with the information obtained from staff using
the GAF checklist to provide a multidimensional view of the
subject’s progress. Further, subjects themselves will be asked to
rank the degree of success (on a scale from 1 to 100) that they
have made in reaching their original goals that were self-
contracted in their plan of change during phase one. Subjects
will rank their success in goal achievement during the 4th, 7th,
and 13th months of the study.
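Because the same 1-to-100 goal-achievement rankings are gathered from staff, from friends and family, and from the subjects themselves, they can be triangulated by placing all ratings in one table and comparing the sources over time. The sketch below is hypothetical; the single client, the column names, and the ratings are invented purely to show one way such a comparison might be organized.
```python
# Minimal sketch: triangulating hypothetical 1-100 goal-achievement ratings
# from three sources (staff, family/friends, self) at the 4th, 7th, and 13th months.
import pandas as pd

ratings = pd.DataFrame({
    "client_id": [101] * 9,
    "month":     [4, 4, 4, 7, 7, 7, 13, 13, 13],
    "source":    ["staff", "family", "self"] * 3,
    "rating":    [55, 60, 70, 68, 72, 80, 75, 78, 85],
})

# Composite view: the mean rating per month and the spread across sources,
# which flags months where staff, family, and client perceptions diverge.
print(ratings.groupby(["client_id", "month"])["rating"].agg(["mean", "std"]))

# Side-by-side comparison of each source over time for the same client.
print(ratings.pivot_table(index="month", columns="source", values="rating"))
```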
In addition, agency cultural competence will be assessed using
the Agency Cultural Competence Checklist, ACCC (Dana,
Behn, & Gonwa, 1992). Specifically, the ACCC is an instrument
that is designed to assess social service agency cultural
competence with racial and ethnic minority groups. This
checklist screens for both general cultural competence
throughout the agency and culture-specific content within the
assessment and intervention categories of that same agency.
This instrument will be provided to staff members and to clients
as a means of generating input on the adequacy of services in
meeting minority needs and/or issues of faith or spirituality.
C. Human Subjects Research—Procedures and Protocols. All
procedures as outlined by the Louisiana Office for Addictive
Disorders and the Louisiana Association of Substance Abuse
Counselors and Training (LASACT) will be followed when
administering therapeutic services to clients. All procedures
required by the Human Subjects Review Board of the University
of Louisiana at Monroe will be followed as well. In addition,
data collection/records keepers will ensure that all data is coded
and completely unidentifiable by the researchers or by others
viewing the records. The primary investigator will analyze the entered data coded by the data collection/records keepers but will not be familiar with the physical hardcopy data sources, nor will he or she have identifiable contact with or knowledge of the clients of each facility who will be the subjects for this study. It should be noted that Dr. Hanser is a
Licensed Addictions Counselor (LAC) and a Licensed Professional
Counselor (LPC) in the State of Louisiana and therefore has a
very good understanding of legal and ethical issues related to
addictions treatment and therapeutic interventions while also
having a strong grasp of research ethics pertaining to human
subjects’ safety and confidentiality.
Types of Data Collection
The evaluation plan just noted is a bit detailed but was designed
to obtain a blend of different measures and to increase
accountability among treatment staff to ensure that they focus
on the outcomes of their efforts. This blend of measures can come from several sources but generally falls within four categories: direct observation, interviews, surveys and questionnaires, and official records. A
description of each category was obtained from the BJA and is
presented below:
· 1.Direct Observation: Obtaining data by on-site observation
has the advantage of providing an opportunity to learn in detail
how the project works, the context in which it exists, and what
its various consequences are. However, this type of data
collection can be expensive and time consuming. Observations
conducted by program staff, as opposed to an outside evaluator,
may also suffer from subjectivity.
· 2.Interviews: Interviews are an effective way of obtaining
information about the perceptions of program staff and clients.
An external evaluator will usually conduct interviews with
program managers, staff members, and clients to obtain their
perceptions of how well the program functions. Some of the
disadvantages with conducting interviews are that they tend to
be time consuming and costly. Further, interviews tend to
produce subjective information.
· 3.Surveys and Questionnaires: Surveys of clients can provide
information on attitudes, beliefs, and self-reported behaviors.
An important benefit of surveys is that they provide anonymity
to respondents, which can reduce the likelihood of biased
reporting and increase data validity. There are many limitations
that are associated with surveys and questionnaires, including
the reading level of the client and cultural bias. However, the
use of standardized instruments provides a number of benefits
because they have been tested to ensure at least a modicum of
validity and reliability. The use of standardized surveys,
questionnaires, and instruments enhances the baseline data that
is initially collected and this then adds to the strength of the
evaluation. More information on standardized instruments will
be provided later in this chapter.
· 4.Official Records: Official records and files are one of the
most common sources of data for criminal justice evaluations.
Arrest reports, court files, and prison records all contain much
useful information for assessing program outcomes. Often these
files are automated, making accessing these data easier and less
expensive.
Regardless of the type of data-gathering process that is
ultimately used, evaluators tend to conduct two general types of
agency evaluation: program outcome evaluation and process
evaluation. Program outcome evaluation entails an ongoing
collection of data to determine if a program is successfully
meeting its goals and objectives. In many cases, these measures
address project activities and services delivered. Some
examples of performance measures might include the following:
the number of clients served, changes in attitude, and rates of
recidivism. These types of evaluations tend to measure the
overall outcome of the projects. Effective treatment programs
produce positive outcomes among clients. As would be expected, these programs generate change while clients participate in the program, and, in the most successful programs, client progress continues even after the client is discharged from a particular treatment regimen. Areas of evaluation that might be used to demonstrate outcome effectiveness might include any of the following (a brief computational sketch is provided after the list):
· 1. Cognitive ability (improvements in recall and/or overall
testing scores or times)
· 2. Emotional/affective functioning (such as anxiety and
depression)
· 3. Pro-social attitudes and/or values (such as improved
empathy, honesty, etc.)
· 4. Education and vocational training progress (traditional
achievement tests)
· 5. Behavior (evidenced by observable behaviors).
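As a brief illustration of how a few of the outcome measures just listed might be computed from agency records, the sketch below uses a small, entirely hypothetical client table; the field names (for example, reconvicted_12mo and attitude_pre) are illustrative and are not taken from any actual agency data system.
```python
# Minimal sketch: computing illustrative outcome measures from hypothetical records.
import pandas as pd

clients = pd.DataFrame({
    "completed_program": [True, True, False, True, True],
    "reconvicted_12mo":  [False, False, True, False, True],
    "attitude_pre":      [40, 35, 50, 45, 38],    # pro-social attitude score at intake
    "attitude_post":     [55, 48, 52, 60, 41],    # pro-social attitude score at discharge
})

recidivism_rate = clients["reconvicted_12mo"].mean()     # proportion reconvicted within 12 months
attitude_change = (clients["attitude_post"] - clients["attitude_pre"]).mean()
completion_rate = clients["completed_program"].mean()    # proportion completing treatment

print(f"recidivism rate: {recidivism_rate:.0%}")
print(f"mean attitude change: {attitude_change:.1f} points")
print(f"treatment completion rate: {completion_rate:.0%}")
```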
Process evaluations focus on the implementation of the program
and its day-to-day operations. Typically, process evaluations
address specific processes or procedures that are routinely done
within the agency. In many cases, process evaluation refers to
assessment of the effects of the program on clients while they
are in the program, making it possible to assess the institution’s
intermediary goals. Process evaluation examines aspects of the
program such as:
· 1. The type of services provided
· 2. The frequency of services provided
· 3. Client attendance in individual or group counseling sessions
· 4. The number of clients who are screened, admitted,
reviewed, and discharged
· 5. The percentage of clients who successfully complete
treatment.
Sex Offender Treatment Programs (SOTP): The Importance of Evaluation
One type of treatment program, and treatment population, that warrants routine assessment and evaluation is the sex offender treatment program and its clients.
The evaluation of these programs is quite naturally important
because sex offenders have generated a high level of public
concern. Determining whether treatment programs do indeed “work” is paramount to determining whether this population should be given treatment in lieu of simple incarceration. Further, effective evaluation
allows programs to improve their implementation. Due to public
safety concerns associated with sex offenders, effective
evaluation has become a very important element in designing treatment programs for this population.
Sex offender treatment programs entail a variety of approaches
that are used to prevent convicted sex offenders from
committing future sex offenses. Students should refer
to Chapter 12 on sex offender treatment programs when
considering the evaluation of such programs. As one may recall,
these approaches include different types of therapy, community
notification, and standardized assessments (Bureau of Justice
Assistance, 2008). Given the high level of denial among sex
offenders, it is important that assessment and evaluation
components are able to measure both latent as well as manifest
aspects of sex offender progress in treatment. In other words,
the skilled evaluator will keep in mind that this population is
inherently very manipulative and will need to ensure that their
evaluation model is able to detect deceit and manipulation from
data provided by these offenders.
Evaluations for sex offender treatment programs in prison are
likely to have some differences from those in the community,
particularly since public safety concerns are greater for those
who are in the community. While some scales and processes
will remain the same in both settings, evaluators in community-
based settings will also need to consult with family and
friends of the sex offender much more frequently than in a
prison setting. The reasons for this are simply because such
individuals are likely to have more direct observations of the
offender, their behavior, and their apparent commitment to the
treatment regimen.
Typically, there are three common therapeutic approaches to
treating sex offenders. These approaches include (1) cognitive-
behavioral approach, which focuses on changing thinking
patterns related to sexual offending and changing deviant
patterns of sexual behavior, (2) psychoeducational approach,
which focuses on increasing offenders’ empathy for the victim
while also teaching them to take responsibility for their sexual
offenses, and (3) pharmacological approach, which uses
medication to reduce sexual response. As one may recall from Chapter 12, the primary treatment approaches are cognitive-behavioral, though many programs incorporate psychoeducational aspects as well. The pharmacological approach has not been
discussed in this text and will generally not be an area of
intervention that will require substantial input from the
correctional counselor. It is for this reason that, when
discussing evaluation, we focus our attention on efforts to
evaluate cognitive-behavioral and psychoeducational
interventions.
Beyond the treatment staff, the supervision of sex offenders—
and the evaluation of sex offender treatment programs—should
include all parties who are involved with the case management
of the sex offender, including law enforcement, corrections,
victims (when appropriate), the court, and so on. All of these
personnel can provide very useful information that may not be
readily apparent to the evaluator. The key for the evaluator is to understand the unique vantage point from which each party views the sex offender treatment and/or supervision process. It is the composite picture, made up of the
full range of individual observations, that should be used by the
evaluator. Each party individually can provide valuable
information in assessing the effectiveness and efficacy of the
sex offender treatment program and supervision strategies
(Bureau of Justice Assistance, 2008). Collectively, these parties
provide a multifaceted view of the offender’s progress.
Further, as was noted in Chapter 12, sex offenders are very
manipulative, and even skilled therapists (and community
supervision officers) may have difficulty discerning whether
such an offender is making genuine and sincere progress.
Because of this, it is important for the evaluator to get a
comprehensive “snapshot” of the offender that is
multidimensional in scope. The use of numerous observations
and the comparison of those observations help to ferret out
faulty data provided to the evaluator, whether the faulty data
was provided deliberately (such as from the sex offender
himself or herself) or accidentally/unknowingly from various
personnel working with the offender. Naturally, the more
comprehensive and the more accurate the evaluation, the more
likely that agencies can refine their processes. Refined
processes lead to more effective treatment and this then leads to
increased public safety if the sex offender ceases recidivism due
to effective treatment. Thus, the evaluator is a primary player in
improving community safety through agency assistance in
optimizing their service delivery.
As with our earlier example of an evaluative design for a
substance abuse treatment organization, the use of standardized
assessment instruments with sex offenders can greatly improve
the validity and reliability of the evaluation. Standardized tools
are more effective than “home grown” surveys and
questionnaires because, as we noted in the previous subsection,
they have been tested to ensure that they are valid and reliable
in providing treatment planning information for counselors and
security criteria for correctional administrators and supervision
staff. Thus, standardized assessment tools tend to increase the
likelihood of treatment efficacy and also better identify sex
offenders who are at a heightened risk of recidivism (Bureau of
Justice Assistance, 2008). A more in-depth discussion on the
use of standardized instruments in the evaluation process will
be provided in part two of this chapter. For now, we simply
wish to note their constructive use when conducting
evaluations.
Beyond the use of standardized data-gathering tools, evaluators
tend to also address a number of specific areas of concern for
publicly operated sex offender treatment programs. These areas
of attention, as noted by the BJA (2008), include the following:
· 1. Attrition in sex offender programs with the hope of
increasing the number of offenders who complete treatment
· 2. Identification of offense characteristics that predict
treatment failure
· 3. Development of processes to better track high-risk sex
offenders
· 4. Continual improvement of the validity and reliability of
screening and assessment instruments that are used
· 5. Improving interventions for specific categories of sex offenders to move beyond one-size-fits-all treatment orientations.
When conducting evaluations of sex offender treatment
programs, there are a number of program outcome measures that
may be utilized. The program outcome measures noted below
are among those that are more common and provide
administrators with a general idea of what their program
processes produce upon completion of the program:
· 1. Proportion of reconvictions for sexual offenses
· 2. Change in treatment motivation
· 3. Change in treatment engagement
· 4. Increase in offender emotional health or adjustment
· 5. Decrease in pro-offending attitudes
· 6. Decrease in inappropriate sexual drive
· 7. Decrease in aberrant sexual arousal and sexual fantasies.
In addition, process measures provide an understanding of the
day-to-day operations of the treatment program. These types of
measures aid clinical supervisors and agency administrators in
determining specific areas of treatment that work well while
identifying those areas that need some type of modification or
improvement. Some of the common process measures examined
include the following:
· 1. Number of face-to-face contacts between treatment provider
and sex offender
· 2. Number of meetings between the sex offender, therapist,
and probation officer
· 3. Number of visits by probation officers to the home of the
sex offender
· 4. Number of urine screenings for drugs/alcohol
· 5. Number of medication-induced side effects
· 6. Level of community supervision received.
Lastly, the BJA (2008) has noted that there are numerous sex
offender studies with different methodological problems such as
small sample sizes, the lack of equivalence among control and
experimental groups, and the use of low quality assessment
scales. Despite this, some sex offender studies have provided
evidence that suggests that treatment programs used today are
more effective than those used in the 1980s and 1990s. Of
interest is the fact that evaluations that have compared different
therapeutic approaches have consistently demonstrated that
cognitive-behavioral treatment approaches hold particular
promise for reducing sex offender recidivism (Bureau of Justice
Assistance, 2008).
As discussed in Chapter 12, cognitive-behavioral treatment with
sex offenders is often provided in a group setting that focuses
on cognitive distortions, denial of the offense while in
treatment, deviant sexual thoughts and arousal, and a lack of
empathy for victims. These programs lend themselves well to
evaluation due to their clear processes of implementation and
the ease by which those processes can be defined and quantified
for research purposes. However, the ultimate litmus test of
success is whether the sex offender recidivates, particularly
through the commission of another sex offense. It is in this
regard that cognitive-behavioral programs tend to demonstrate
very good program outcome results because these programs tend
to have more frequent and more significant reductions in
recidivism than most other interventions that exist.
SECTION SUMMARY
Evaluative research is very important to treatment agencies
since it is this process (and this process alone) that allows
correctional counseling programs to operate as evidence-based
programs. The use of internal evaluation is what ensures that
counseling processes are in a state of continued refinement and
improvement. This means that the evaluator, in many respects,
must act in an independent fashion when conducting data
collection and the research that will evaluate the agency.
Likewise, the ideal evaluator is one who not only has sufficient
credentials in research and statistical analysis but also has
experience and expertise with the specific type of treatment
program that is being evaluated. This will ensure that the
evaluator will have a good contextual understanding of the
dynamics within the agency and/or the challenges that tend to
be encountered in a given area of treatment service. In addition,
the evaluator should strive to have a cordial and warm rapport
with agency staff, but it is their task to operate in a neutral and
detached manner when determining quantitative outcomes for
the agency.
When designing the evaluation plan, five key elements should
be addressed. These five elements are as follows: (1)
determining the target audience for the evaluation and the
dissemination of its results; (2) identifying the evaluation
questions that should be asked; (3) determining how the
evaluation design will be developed; (4) deciding the type of
data to be collected, how that data will be collected, and by
whom; and (5) articulating the final products of the report that
will be produced. This last element is what will be most
important to the treatment program or facility since this will be
the document that will determine whether the agency is viewed
as a success or a failure (or neither).
Lastly, evaluators must provide measures for both processes and
outcomes within the agency. Process measures are related to the
day-to-day operations within the agency, such as techniques
used in group therapy, number of sessions provided, or number
of weeks that the client is in treatment. Outcome measures
examine the final product once the program has been completed
and might include the behavior of the client, emotional stability
of the client, or a client’s educational achievement while in the
treatment program. In addition, examples of evaluation projects for a drug treatment program and for a sex offender treatment program were discussed. These examples
demonstrated several key aspects of evaluation, such as the use
of standardized instruments (discussed in more detail in part
two of this chapter), the use of outcome and process measures
in evaluation, and the need for treatment and evaluative
personnel to work in a collaborative fashion. Lastly, drug treatment is one of the most frequently encountered forms of treatment provided within the correctional setting, while sex offenders are among the most manipulative offenders whom correctional counselors will encounter. It is for these reasons
that examples were provided for the evaluation of programs
addressing these types of clinical challenges.
LEARNING CHECK
1.
Cognitive-behavioral approaches have a great deal of empirical
research that supports their effectiveness with sex offenders.
· a.True
· b.False
2.
Outcome measures examine the day-to-day operations of
treatment programs.
· a.True
· b.False
3.
Direct observation, interviews, surveys and questionnaires, and
official records are the four primary means by which data are
collected for evaluation projects.
· a.True
· b.False
4.
The Addiction Severity Index (ASI) is commonly used in
treatment facilities all over the United States.
· a.True
· b.False
5.
Change in treatment motivation has been identified as a
program outcome useful for many sex offender treatment
programs.
· a.True
· b.False
PART TWO: CONSIDERATIONS IN FORMING THE EVALUATIVE DESIGN
The specific approach that a researcher may use to evaluate an
agency may depend on a number of different factors. The needs
of the agency, required reporting to grant funding agencies,
ethical limitations, financial limitations with the research,
process and outcome considerations, and feasibility of
completing the research may all prove to be important factors in
formulating the ultimate evaluative design. These initial
considerations are very important and they will be instrumental
in determining the appropriate approach in evaluation. Further,
for many treatment programs (particularly those that are grant
funded), the results of research projects can be very important
in determining if programs continue to exist. Consider, as an example, that research related to the effectiveness of juvenile boot camp programs has tended to show that such programs do not provide long-lasting changes in the behavior of delinquent youth. These youth, once released, tend to return to their criminal behaviors once they are back in their old environments.
When such findings emerge, questions related to the accuracy of the results may also be generated. This is just as true when we find that programs work exceptionally well. In such cases, we must be able to clearly demonstrate that our findings have been produced by the phenomena that we believe served as the causal factors. Consider again our example of the
juvenile boot camp observation. How do we know if it is the
structure of the juvenile boot camp intervention that is flawed?
Could it be that juvenile boot camps are well designed and
successful but some other spurious factors were causing
recidivism among these youth? How do we determine and
distinguish between these different potential explanations for
juvenile recidivism after finishing a boot camp program?
Answers to these questions can only be provided if we ensure
that two primary constructs exist within our research. These
constructs are known as validity and reliability.
Validity in Evaluative Research
Validity describes whether an instrument actually measures the
construct or factor that we have intended to measure. For many
students, it may seem strange that one could not know if they
are measuring what they intend to measure; however, the mental
health and counseling fields often are tasked with measuring
concepts that cannot be readily and physically seen. For
instance, the measurement of attitudes may be quite difficult,
particularly if a client is deliberately being deceptive. In
addition, some clinical disorders may consist of symptoms that
also exist with other disorders, thereby making it difficult to
distinguish the disorder that is actually being measured.
Further, some disorders may frequently coexist with other types
of disorders, being so commonly connected that medications
prescribed for one may be similar or identical to those
prescribed for the other. An example of this would be the
disorders of anxiety and depression. In many cases,
psychiatrists may prescribe identical medications for both
disorders. Further, it is frequent for persons with one of these
disorders to also present with the other. Distinguishing whether
a client engages in a behavior due to anxiety responses or
depressive/affective responses may be important from a clinical
perspective. Therefore, whatever measure the treatment program uses, it is important that it correctly and accurately discerns between these two disorders if the desire is to optimize treatment outcomes. Though these two disorders may coexist,
they are actually quite different from one another and
individualized treatment plans must correctly distinguish
between such clinical nuances if effective treatment outcomes
are to be expected. Thus, the process used to distinguish between disorders must be valid; it must measure the disorder that it is intended to measure, without conflating it with others, thereby providing for correct clinical diagnoses.
This type of clinical example can become even more important
and even more complicated when other constructs, such as low
self-esteem, are also added into the therapeutic equation.
Indeed, many persons with low self-esteem suffer from minor depression, anxiety, or both. The question then becomes: “Which comes first, the low self-esteem followed by depression and/or anxiety, or the depression and/or anxiety with corresponding low self-esteem?” In order to answer this question correctly, one must be able to distinguish between both clinical disorders as well as the general construct of low self-esteem. Only a valid measure will be able to do this.
What is more, this measure must be very sensitive to underlying
differences between disorders and constructs that have many
latent interconnections; this further complicates the ability to
achieve valid measurements but also demonstrates why this is
all the more important. In theory, if you address the primary
issue first, the other issues will tend to also subside on an
exponential basis.
Though there are many more examples of clinical and
nonclinical situations where invalid measures may be
mistakenly used by researchers, we provide this example to
demonstrate the complexity associated with distinguishing valid
results in correctional treatment. We also provide this example
to demonstrate why it is so important to correctly discern
among various disorders and behavioral constructs. This is even
more critical to public safety when behavioral symptoms
include violent and/or medically risky behaviors. Therefore, it
is important that evaluators of mental health programs ensure
that their measures are valid and it is important for clinicians
being evaluated to remain receptive to the requests of evaluators
to provide exacting and detailed specificity as to observed
symptoms, clinical impressions, and other aspects that the
counselor may use to generate his or her own clinical judgments
in treatment.
Reliability in Evaluative Research
Reliability is a concept that describes the consistency and accuracy of a measure, which in turn shapes the accuracy of a study. As an example,
consider again an evaluation where measurements of client
anxiety are taken. A reliable measure would provide a measure
that accurately reflects the level of anxiety and this measure
would consistently be provided over time and throughout
multiple measures if interventions were not provided. This
measure is reliable when it reflects the true level of anxiety that
the client experiences accurately and on a consistent basis. The
ability to gauge the level or intensity of a mental health
symptom (such as anxiety) correctly and consistently over
multiple measurement points makes a process reliable. It is
important to clarify that the consistent reporting of results, in
and of itself, is not the only consideration in determining
reliability. Rather, it is also the ability of the measure to correctly capture the level, or modulation, of that symptom.
For example, a measure may consistently demonstrate that a
client has low levels of anxiety when, in fact, they suffer from
high levels of anxiety. Since the person does, in fact, suffer
from anxiety this measure is valid; it is expected that anxiety is
being measured and the instrument does indeed measure
symptoms of anxiety. However, the instrument is not reliable
because it consistently provides a measure that underrates the
level of anxiety that the client consistently experiences.
Consistently inaccurate measures cannot be considered reliable.
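One common way to put a number on the kind of reliability just described is a test-retest check: administer the same anxiety measure twice, with no intervention in between, and examine both the correlation between the two administrations and any systematic shift in level. The sketch below uses simulated scores and is offered only as an illustration of the concept, not as a full psychometric analysis.
```python
# Minimal sketch: test-retest check on simulated anxiety scores (no treatment given
# between the two administrations; all values are hypothetical).
import numpy as np
from scipy import stats

time1 = np.array([62, 58, 71, 66, 80, 55, 74, 69])   # first administration
time2 = np.array([60, 59, 70, 68, 78, 54, 75, 67])   # second administration, one week later

r, _ = stats.pearsonr(time1, time2)                  # consistency across administrations
mean_shift = (time2 - time1).mean()                  # systematic over- or under-rating

print(f"test-retest r = {r:.2f}, mean shift = {mean_shift:.1f} points")
# A high r with a near-zero shift supports reliability; a high r paired with a large
# constant shift mirrors the chapter's example of an instrument that consistently
# underrates anxiety and therefore cannot be considered reliable.
```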
Validity and reliability are absolutely critical to conducting
evaluative research; without them the research is essentially
useless. Research in the field of correctional counseling is
particularly important due to the implications that may emerge
related to public safety and the continuation of programs.
Therefore, the role of evaluators in treatment programs is one
that is very important, both within the lone treatment facility
and when making determinations for the funding of programs
throughout a state or the nation. But the question then emerges,
how do we ensure that the outcomes that are produced are, in
fact, valid and reliable? One effective means of obtaining valid
and reliable data would be to use standardized instruments that
have been specifically designed to ensure that client
information meets acceptable criteria with both constructs.
The Basics of Standardized Treatment Planning and Risk Assessment Instruments
As has been noted, the use of standardized instruments can add
strength to any evaluation design. These instruments have been
tested through a variety of processes and statistical analyses to
ensure their validity and reliability, when properly used. It is
the last part of the prior sentence—when properly used—that is
important to note for correctional counselors. Many counselors
who have the traditional graduate level education in counseling
(this includes correctional counselors) will tend to have only
one course that deals specifically with testing and assessment.
Further, these programs often only require one class in research
methods and, as is customary among counseling programs
throughout the United States, there will be no specific course in
statistics. This is because many counseling programs are
designed to train therapists, not researchers.
On the other hand, the field of psychology tends to consistently
require at least one research methods course, a separate
statistics course, and will also have at least one (or more)
courses in testing and assessment. Even with this increased
emphasis on statistics and testing processes, persons with only a
master’s degree in psychology are not able to practice without
obtaining some sort of supervision from a Ph.D. level
psychologist. This is despite the fact that counselors with a master’s degree in counseling, as well as advanced internships and practica, are licensed to conduct therapeutic services.
These counselors are typically not qualified to
conduct psychological testing on their own without additional
training and, even then, there are limits to the types of tests that
they may legally administer.
For laypersons and for paraprofessionals, the training in testing
is even less than what is obtained by licensed counselors. In
some treatment settings, paraprofessionals may conduct the
majority of the day-to-day work, and they may even be required
to read and utilize the results from standardized tests when
performing their job. Naturally, these persons are not able to
administer, score, or interpret such tests. They typically will
simply use the results from an appraisal or evaluative specialist
as a tool in treatment planning.
The reason for describing the credentials involved with the use
of standardized tests is to demonstrate that few mental health
professionals are able to administer, score, and interpret these
tests without a doctoral level education. Further, many
correctional treatment settings do not have full-time clinical
psychologists and/or counselors who are qualified to conduct
test administration. Thus, correctional counselors tend not to be well grounded in an understanding of the basic characteristics of a sound and empirically designed standardized instrument, particularly its psychometric properties. This is an
important point to note and this is precisely why we have
included a brief overview of those characteristics of a valid and
reliable testing mechanism.
Before proceeding further, students should understand that
standardized tests tend to be used for two key purposes in
correctional counseling: treatment planning and security
classification. As has been noted in earlier chapters
(specifically Chapters 1 through 3), correctional counselors
must not only attend to therapeutic concerns of offenders who
are clients, but they must also consider public safety when
determining the prognosis of their clients. In other words, they
must be concerned as to whether their clients will cause
additional harm in society once they are released from a
correctional facility and/or from community supervision.
Because of this, correctional counselors will sometimes deal
with standardized assessment tools that serve both a treatment
planning and a security classification purpose.
Thus, it is useful for correctional counselors (and especially
treatment evaluators) to understand some of the common
principles associated with standardized treatment planning and
classification instruments. A failure to understand these basic
statistical and/or methodological considerations can lead to the
misuse of these instruments among clinicians. James Austin
(2006) provides six basic suggestions for correctional treatment
professionals who may wish to know whether their instruments
are effective. Many of Austin’s comments have to do with the
methodology that was used to construct the testing instrument,
which then relates to the validity and reliability of that given
instrument. Thus, knowing these basic concepts can help
correctional counselors to ensure that instruments that they use
and/or integrate into their treatment planning are appropriate
and this also can ensure that correctional counselors use those
instruments appropriately in their day-to-day operation.
According to Austin (2006), the following points should be considered when utilizing standardized instruments for treatment planning, classification, and/or evaluative purposes:
· 1.Selected Standardized Instruments Must Be Tested on Your
Correctional Population and Separately Normed for Males and
Females. Austin (2006) notes that when assessment tools are
tested on the offender populations in one area of the nation,
they may not be as relevant to offenders in another area. For
example, consider the state of California as compared to the
state of Nebraska. It is likely that the offender populations in
each state will differ, one from the other. Because of this,
treatment programs and treatment program evaluators should
use instruments that are essentially normed on—or tailored to—
the characteristics of offender populations that are similar to
those that they work with. Austin (2006) points out that “in
research terms this issue has to do with the ‘external validity’
of the instrument and the ability to generalize the findings of a
single study of the instrument to other jurisdictions” (p. 1).
Therefore, if an instrument is normed on an offender population
that is substantially different from the one that the evaluator is
assessing, it is likely that the assessment and the evaluation
outcomes will not be as accurate (Hanser, 2009). Further, male
and female offenders differ in both their treatment needs and
security concerns. Characteristics associated with criminal
behavior and prognoses for treatment tend to differ between
male and female offenders (Hanser, 2009). Because of this,
standardized instruments should be different for male and
female offenders or instruments should have built-in
mechanisms that are designed to differentiate between both
populations; but in many cases separate instruments are not
used and typically used instruments do not sufficiently
differentiate between the needs of male and female offenders.
To be reliable, assessment tools must give appropriate weight to
gender differences among offenders, both in treatment planning
and in the evaluative process (Hanser, 2009). Austin (2006)
comments further that “recidivism and career criminal studies
consistently show that females are less involved in criminal
behavior, are less likely to commit violent crimes and are less
likely to recidivate after being placed on probation or parole”
(p. 1).
· 2.Interrater Reliability Tests Must Be Conducted with Instruments That Are Selected. Austin (2006) states that both an interrater reliability test and a validity test must be completed by
independent researchers prior to using a test for treatment
planning, assessment, or evaluation. Further, these reliability
and validity safeguards should be assured by researchers who
accrue no monetary or political benefit when determining
whether a standardized test is reliable and/or valid
(Austin, 2006; Hanser, 2009). In simple terms, interrater
reliability has to do with the consistency of the results that are obtained from an instrument. An instrument with good interrater reliability should consistently yield the same outcomes regardless of the person who administers it (Hanser, 2009). This is very important for evaluative research and echoes the points made earlier in our previous subsection regarding reliability in the evaluation design; a brief sketch of how interrater agreement can be quantified is provided after this list.
· 3.A Validity Test Must Be Conducted. As with evaluative
designs, the instruments used in those designs must also be
valid. As has been explained earlier, validity ensures that the
instrument is actually measuring what the evaluator and/or
correctional counselor believe is being measured. As we noted
in our example with valid measures of anxiety (see our earlier
subsection), instruments can provide measures that correlate
with a given issue but the cause of that correlation may be due
to some unknown factor (Hanser, 2009).
· 4.The Instruments Must Allow for Dynamic and Static Risk
Factors. Students should recall from Chapter 3 the distinctions
between dynamic and static risk factors. Dynamic risk factors
include characteristics such as age, marital status, and custody
level (Hanser, 2006, 2009). The key commonality among
dynamic risk factors is that they can and do change over time.
Static risk factors include characteristics such as age at first
arrest, crime seriousness, and prior convictions. Once
established, these characteristics do not fluctuate over time
(Hanser, 2006, 2009). Both of these factors are important for
treatment planning while the offender is on supervision, risk
prediction during release from incarceration, and in evaluating
offender outcomes in treatment programs. For example, one
author of this text who is also an independent evaluator for a
drug treatment center for female offenders sought to determine
if age had a significant correlation with various aspects of
treatment success. In this case, a dynamic risk factor was
utilized to analyze offender outcomes. In addition, this same
evaluator sought to determine if the number of prior convictions
was significantly correlated with treatment success; this is an
example where a static risk factor was used to evaluate client
treatment outcomes.
· 5.Instruments Must Be Compatible with the Skill Level of
Treatment Staff. As was discussed earlier, different treatment
staff will tend to have different levels of credentialing (i.e.,
laypersons, paraprofessionals, counselors and psychologists
with master’s degrees, counselors with doctorate degrees and
specific training in psychometrics, and clinical psychologists
with doctorate degrees). The level of credential can be
important since this determines whether a person may be
qualified to administer a specific test. Indeed, the accuracy of an assessment instrument can depend as much upon the skill of the person administering the tool as upon its construction. It is not enough for a clinician and/or evaluator to use a well-developed instrument; they must also have sufficient training in statistical analysis, research design, and testing processes before they can properly administer many standardized tests. Naturally, some tests are more complicated than others, and for this reason different tests may require different levels of credentialing. In addition, evaluators must have experience
administering those instruments or instruments similar to those
that they use. Training or education alone is not sufficient;
there is simply no replacement for the skill and familiarity that
is acquired through the process of repetitive administration of a
given instrument. The importance of these qualifications cannot
be overstated. Further, many evaluative efforts do not include standardized instruments because such instruments can be costly to purchase, qualified personnel can be expensive to secure, and the administration process can be complicated and demanding. However, for agencies that truly wish to improve their service delivery and the treatment outcomes of their clients, these costs and drawbacks are outweighed by the value that such instruments add to an evaluative design. The importance of
professional qualifications is often evidenced by the fact that
companies such as Western Psychological Services (WPS) and
Psychological Assessment Resources (PAR), two well-known
companies that copyright and sell standardized instruments,
require persons ordering such instruments to provide proof of
their credentials, training, and/or experience with similar
instruments.
· 6.The Assessment Instrument Must Have Face
Validity. Lastly, the instrument and the process of assessment
must be understood and recognized as credible by treatment
staff and clients of the program that is being evaluated. Indeed,
instruments that are only understood by academics will not be
widely accepted by most treatment staff and such instruments
can often confuse offenders who, in many cases, do not have
well-developed reading skills. Further, if the instrument is
perceived as being too “bookish” in nature and not applicable to
the realities of the “street,” so to speak, clients are likely to
view the instrument as artificial and sterile, not really being
able to probe the true reality of what an offender may (or may
not) experience (Hanser, 2009). With this in mind, students
should understand that a lack of “face validity” means that the
instrument is not recognized as valid on its face, or at initial
glance, by those who judge its ability to assess or appraise a set
of characteristics (Hanser, 2009).

Ethics in Evaluation
Ethics refers to what is right and wrong in relation to human
conduct. This is a vital component of any research endeavor and should be taken seriously. At no time should human subjects be exposed to undue harm in the course of carrying out a research project. One of the best ways to ensure ethical standards is to be
open and honest with participants. Each component of the
research design should be clearly explained to all participants.
And, participants should be given the opportunity to freely
choose whether to consent or refuse to participate in the study.
In addition, great care should be taken to ensure that the
identity of each participant remains anonymous. Three ethical
principles were established by the Department of Health,
Education, and Welfare in 1979 aimed at protecting human
subjects and eliminating human rights violations:
· 1. Respect for persons—treating persons as autonomous agents
and protecting those with diminished autonomy;
· 2. Beneficence—minimizing possible harms and maximizing
benefits;
· 3. Justice—distributing benefits and risks of research fairly
(Schutt, 2006, p. 81).
All research proposals should be reviewed by the appropriate
Institutional Review Board (IRB). The primary purpose of the
IRB is to ensure that ethical standards clearly resonate in all
facets of the proposal and that risk to human subjects is minimal. When conducting research with human subjects, IRB approval is critical; in fact, some research projects may require IRB approval from multiple agencies. In addition, we strongly
recommend that students visit the APA’s website on “Ethical
Principles of Psychologists and Code of Conduct.” In particular,
evaluators should take heed of Section 8 on “Research and
Publication,” which notes that participants (particularly agency
clients in treatment) informed consent must be provided. The
following is list of points paraphrased from requirements noted
by the American Psychiatric Association (2009) that should be
communicated to clients in treatment who are part of the
evaluation process:
· 1. The purpose of the evaluation, the procedures involved, and
the duration of the evaluative process
· 2. The voluntary nature of participation in the research and
their right to cease participation at any time that they desire
· 3. Any potential consequences of declining or withdrawing
· 4. Possible risks, discomfort, or adverse effects involved (if
any) with participation
· 5. Potential benefits to the client and/or the agency that the
evaluative research might produce
· 6. The general limits of confidentiality (students should refer
back to Chapter 2 for additional information on confidentiality)
· 7. Any incentives provided to get clients to participate
· 8. Information on their rights and notice of a contact person to whom questions regarding the evaluation process can be directed.

Reviewing Evaluation Findings
Once the evaluator has designed and implemented the
evaluation process within a treatment agency, it is not enough
for that person to simply “crunch numbers” and provide
statistical reports. Rather, they must communicate the outcome
of the evaluation and provide feedback and/or suggestions to
treatment personnel so that they can refine their techniques and
approach. Creation of this feedback loop is critical; without it,
the evaluation simply sits stale and useless within the treatment
agency. Because evaluators must interpret and explain their
findings, it is important for the evaluator to have worked as a treatment provider, if at all possible. This allows the evaluator to understand the nuances and unspoken complications involved in providing therapeutic services. Without such insight, evaluators are limited to a one-dimensional understanding of the treatment process and are restricted to the limits of their data when interpreting results.
Beyond the process of collecting data and conducting analyses,
evaluators are often trusted by treatment programs to provide
interpretations and to produce conclusions resulting from their
analysis. Along with this, evaluators may provide
recommendations that are based on the findings. The evaluator,
in providing such recommendations, will usually discuss the
outcome with agency supervisors. In such cases, correctional
counselors would be well served to heed the information
provided by evaluators since their analysis is likely to be free of
the subjective impressions that counselors tend to form of
clients and their clinical situation. This is not to say that, in all cases, the evaluator's interpretation of treatment effectiveness is more accurate than that of the therapists who work in a given treatment facility. Rather, it is to say that the evaluator's
observations can serve as a good counterbalance to subjective
observations of program staff. This is perhaps one of the best
means by which clinicians can optimize their interventions and,
in the process, establish their treatment program as being
evidence-based in nature.

Incorporating the Evaluation Research Findings into Therapy
The primary goal of evaluation research is to enhance the
services provided to offenders. We need to know what is
working and what types of interventions are able to enact
meaningful change and help keep offenders out of future contact
with the criminal justice system. This is a critical component
for creating and maintaining credibility of the counseling
profession in working with offenders. Criminal justice is a
discipline that frequently sees the theoretical pendulum swing
from tougher incarceration policies to those more focused on
rehabilitation and counseling. In order for counseling to remain
viable we need to strive toward implementing practices that are
theoretically sound and able to adapt to the peculiarities of
individuals within the offender population and their particular
needs.
Relapse and recidivism are concepts that generally represent
different disciplines but are inextricably connected. In
counseling we use relapse to signify an individual’s
reengagement in problem behavior. In criminal justice we use
recidivism to describe the process of committing a criminal act
that brings an individual back into the justice system. From the
perspective of correctional counseling these concepts are best
viewed as part of a singular process, meaning that, generally,
offenders who recidivate are going to be offenders who have
also relapsed into some type of problem behavior. Indeed, further evidence of how intertwined these concepts are can be seen in recent grant Requests for Proposals (RFPs) released by SAMHSA, where specific grant projects call for programs that
simultaneously address substance abuse relapse and criminal
recidivism.
Correctional counselors will eventually select a style of
counseling that most suits their own personality and expertise.
The selected style of counseling should be one that allows each
counselor to operate from his or her authentic self. In addition
to each counselor’s individual knowledge of his or her
particular therapeutic modality it is very important that
counselors listen to offenders as they share their own reasons
for relapse and recidivism. The offender's self-reported reasons for engaging in the behavior that led to his or her arrest are rich information for the counselor to explore. It may be that
there are intricacies within a story that are unique to an
offender and require specialized interventions that aim to
reframe cognitions and alter behavior. Self-reported data also provide a good means of validating information that may have been captured in the standardized instruments administered by many facilities at intake. Common standardized assessment
instruments measure an offender’s levels of depression, anxiety,
and trauma. These initial assessment instruments and self-report
data usually provide a baseline from which subsequent
counseling services can be gauged in regard to whether an
offender’s psychological and emotional outlook is improving
(Figure 14.1).

Creating a Feedback Loop in Therapy
The process of refining one’s method of counseling should be
constant. Much of the refinement should be based on both
quantitative and qualitative information gained from the process
of interacting with offenders and delivering treatment. When the
data collection process adheres to acceptable standards of
scientific investigation, the data produced should be relied upon
heavily to “drive” future counseling sessions. In essence, the
entire process of counseling offenders is best viewed as a
circular phenomenon that mirrors the process of scientific
inquiry. We begin with a distressed offender and attempt to understand the particulars of the distress. We then
proceed to the implementation of counseling techniques in an
effort to reduce the distress. During this process we are
constantly evaluating whether the treatment is effective. If the
offender shows signs of improvement based on an intervention
we will likely continue with subsequent application. If the
offender does not seem to be responding well, or improving, it
may be that we need to adjust our methods of intervention and
then reassess after a reasonable period of time. This process
continues until the offender is deemed suitable to proceed
without further treatment.

FIGURE 14.1 The Means by which Data Collection and Evaluation Create Feedback Loops that Impact Agency Interventions.
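The following is a minimal sketch of this feedback loop in Python. It compares a client's current score on a hypothetical standardized symptom scale against the intake baseline and decides whether to continue or adjust the intervention. The scores, the 20 percent improvement threshold, and the decision rule are illustrative assumptions only; they are not clinical standards or a procedure prescribed in this text.

```python
def shows_improvement(baseline, current, min_drop=0.2):
    """Treat a drop of at least 20% from the baseline symptom score as
    meaningful improvement (an illustrative threshold, not a clinical one)."""
    return (baseline - current) / baseline >= min_drop

# Hypothetical depression-scale scores for one client across review points.
baseline_score = 30
review_scores = [28, 24, 19, 14]

for review, score in enumerate(review_scores, start=1):
    if shows_improvement(baseline_score, score):
        decision = "continue current intervention"
    else:
        decision = "adjust intervention and reassess"
    print(f"Review {review}: score={score} -> {decision}")
```

In practice, the decision rule would be informed by the instrument's norms, the reassessment interval, and clinical judgment rather than a single fixed threshold, but the underlying loop of assess, intervene, evaluate, and adjust remains the same.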
Improving Therapy: A Final Note
The best counselors are personally congruent; they are authentic and bring a realness to sessions that makes discussions and disclosures meaningful. Counselors who are not authentic will likely hide behind the delivery of scripted techniques and sanitized disclosures that are incapable of prompting the kind of genuine exchange able to heal old wounds. Counselors must be aware of their own
psychological and emotional needs. Our own ability to attend to
these needs in professional settings models our ability and
willingness to make changes and can be very beneficial to
offenders. Change is frightening for all human beings. But,
imagine the level of trepidation for those offenders who have
never had the opportunity to observe another person take the
risk of disclosing personal information in hopes of a better life.
Counselors have the opportunity to be meaningful change agents
for many of the offenders they encounter. Whether the change
will be meaningful and lasting, however, will in large part hinge
on the counselor’s own psychological and emotional depth. This
is precisely why counselors should take every opportunity to
engage in training aimed at enhancing their own self-
understanding. A guiding question that should always be on the
mind of counselors is: “Would I be willing to do what I am
asking the offender to do?”
Indeed, the process of obtaining continuing education is one that is mandated by most, if not all, ethical governing bodies within the counseling field. This is because the field of counseling (including correctional counseling) is always changing and improving. Therefore, when correctional counselors pursue further education throughout their careers, they are the beneficiaries of evaluative research that distinguishes those approaches that "work" from those that do not. This is a continual improvement process in which one utilizes an approach, tests it, obtains results from that test, and modifies future interventions based on the findings. Simply put, counselors must make a point to stay
abreast of such research and to grow along with their discipline.
To fail to do so constitutes a serious shortcoming in their competency to provide services and amounts to professional negligence. Further still, this failure would also be a failure to safeguard our clients' welfare. Thus, research is important since it guides
us on how our field and our own individual careers should
develop. In essence, we are all a work in progress and the best
treatment professional is one who knows that they never stop
growing, both personally and professionally. To fail to do so
would essentially mean that we have decided to stop caring.
Nothing could be more contradictory to the spirit, point, and
purpose of the counseling profession.

SECTION SUMMARY
When conducting evaluative research, there are a number of
issues to consider prior to starting the actual evaluation. First
and foremost, the evaluator must consider issues related to the
validity and reliability of the research that is conducted.
Without addressing these two important concepts, the evaluation
of the treatment program is likely to have no useful outcome.
One way to facilitate valid and reliable data collection is to use
standardized instruments. Gaining data from clients and staff
through the use of standardized instruments can ensure that at
least a minimal degree of validity and reliability is inherent to
the data that is obtained. However, the simple use of these
instruments does not, in and of itself, ensure that the evaluation
will automatically be successful. The evaluator and relevant
agency staff must be trained on the use of these instruments. If
these instruments are not used properly, the evaluation will
consist of essentially useless information.
Further, ethics in research should be given priority, particularly in regard to the boundaries of confidentiality, ensuring that clients have given informed consent prior to participating in the evaluation process. Once the evaluator has
considered the validity and reliability of the evaluation design
and once they have ensured that ethical safeguards are in place,
they should proceed with the evaluative process. When
completing the evaluation, they should provide feedback to
treatment staff (particularly supervisory clinical staff) to
disseminate their findings. Further, evaluators
should work with treatment staff and administrators to integrate
findings within the processes of the agency’s day-to-day
operations. It is in this manner that feedback loops are built so
that the evaluative process can further aid and support the
continual refinement of treatment interventions.

LEARNING CHECK
1.
Relapse and recidivism are two concepts that should not be
considered related.
· a.True
· b.False
2.
Validity is the ability to get consistent measurements.
· a.True
· b.False
3.
Reliability describes the accuracy of a measure.
· a.True
· b.False
4.
The primary goal of evaluation research is to refine treatment
program efforts aimed at rehabilitating offenders.
· a.True
· b.False
5.
It is not necessary for correctional counselors to understand
evaluation research.
· a.True
· b.False

CONCLUSION
Research and assessment of correctional counseling programs are vital. It is through this process that we are able to identify program strengths and weaknesses that serve to inform the literature. It is also through the evaluative process that we are able to determine if our programs actually work to reduce relapse and recidivism rates among offenders. After all, if these programs simply "feel good" but, in reality, provide little actual
and observable benefit to society in general and the offender in
particular, their usefulness is questionable. It is important that
agencies engage in earnest and sincere evaluation and that the
use of evidence-based approaches is emphasized. By being
evidence based, agencies provide means of demonstrating their
positive impact on society and, due to the evidence that they
produce, provide the means by which other agencies can
replicate their practices.
It is important for correctional counselors to understand the value of evaluative research and to recognize that the role of the evaluator is a helpful one. Indeed, the best
evaluator is one who has also worked in the treatment field,
particularly in the same field that is being subjected to their
evaluation. Such evaluators usually are more in tune with the
processes that they evaluate and they are also better able to
interpret and explain outcomes that are observed. Such
evaluators also tend to be effective in explaining their results to
agency staff and demonstrating how future interventions can be
optimized.
Further, it is important that evaluation designs ensure both validity and reliability. Where validity ensures that one is measuring what one intends to measure, reliability ensures that the measurement consistently yields the same results over time and across administrations. In the field of correctional counseling,
issues that are evaluated require that specific attention is given
to the validity and reliability of the evaluation process. The use
of standardized instruments helps to facilitate this process since
they have been tested for their ability to provide valid and
reliable data. Presuming the evaluator ensures that appropriate
methodological principles are used, evaluations that use
standardized instruments will typically be superior to those that
do not.
Lastly, ethics in research should be maintained by the evaluator.
Just as with correctional counselors, the issue of confidentiality
is important. Clients should be given full disclosure as to the nature of the study and their rights when participating in research, and their informed consent should be obtained. Though clients will have likely been apprised of their
rights to confidentiality during their initial entry into the
treatment program, research evaluators should also cover these
parameters with clients to ensure that they understand their role,
the nature of the research, and their own right to autonomy.
This is an important issue, particularly in cases where clients
are court mandated. Beyond the participation of clients, agency
staff should be encouraged to participate. In such cases,
evaluators can integrate information from staff to provide a
more multifaceted appraisal of the processes involved within
the treatment facility. Further, staff will ultimately be
participants and recipients of the evaluative output since
agencies will usually find it necessary to consider changes and
modifications to their programs as evaluations of their
effectiveness are provided. It is in this manner, through the
incorporation of evaluative data, that agencies can continually
refine and improve their services and become evidence-based
treatment providers in the truest sense of the term.

Essay Questions
· 1. Why is evaluative research important to improving
correctional counseling processes?
· 2. Discuss the purpose of evaluation research. What might be
some consequences of not conducting evaluation research?
· 3. Why are standardized instruments considered particularly
valuable in evaluative research? What are some necessary
characteristics of standardized assessment tools?
· 4. Discuss the various ethical principles related to conducting
research with offenders. What are some of the recommendations
noted by the American Psychological Association?

Treatment Planning Exercise
For this exercise, you will need to consider your readings in this
chapter as they apply to prior readings from Chapter 8 on
Substance Abuse Counseling and Co-occurring Disorders and
from Chapter 9 on Youth Counseling and Juvenile Offenders.
Your assignment is as follows:
You are a researcher and a correctional counselor who has
recently been hired by the community supervision system in
your area. You have been asked to design and evaluate a
treatment program for adolescent substance abusers that has
been implemented within one of the larger cities in your state.
Specifically, you are asked to examine how various aspects of
social learning theory may lead to learned substance abuse
within families of origin and within juvenile peer groups. With
this in mind, you must then explain how various treatment
options might best address domestic battering issues with this
population. The program that you will evaluate uses all of the
interventions listed in Chapter 8 and you are free to select any
theoretical orientation that you desire from Chapters 5, 6,
or 7 of this text. Lastly, you will need to provide a clear
methodology for testing and evaluating your proposed program,
including such factors as validity and reliability of your study
as well as the validity and reliability of your assessment
instruments (if any), the use of control and experimental groups,
as well as ethical issues that might be involved with conducting
such research.

Bibliography
American Psychiatric Association. (2000). Diagnostic and statistical manual of mental disorders. Arlington, VA: American Psychiatric Association.
Austin, J. (2006). How much risk can we take? The misuse of risk assessment in corrections. Federal Probation, 20(2). Retrieved from http://www.uscourts.gov/fedprob/September_2006/risk.html#basics
Belenko, S. (2001). Research on drug courts: A critical review, 2001 update. New York: National Center on Addiction and Substance Abuse. Retrieved from www.drugpolicy.org/docUploads/2001drug-courts.pdf
Bureau of Justice Assistance. (2008). Center for program evaluation and performance measurement. Washington, DC: Bureau of Justice Assistance. Retrieved from http://www.ojp.usdoj.gov/BJA/evaluation/index.html
Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research. Boston, MA: Houghton Mifflin Company.
Center for Substance Abuse Treatment. (2005). Substance abuse treatment for adults in the criminal justice system. Treatment Improvement Protocol (TIP) Series 44. DHHS Publication No. (SMA) 05-4056. Rockville, MD: Substance Abuse and Mental Health Services Administration.
Dana, R. H., Behn, J. D., & Gonwa, T. (1992). A checklist for the examination of cultural competence in social service agencies. Research on Social Work Practice, 2, 220–233.
Hanser, R. D. (2006). Special needs offenders in the community. Upper Saddle River, NJ: Prentice Hall.
Hanser, R. D. (2009). Community corrections. Belmont, CA: Sage Publications.
Lempert, R. O., & Visher, C. A. (Eds.). (1987). Randomized field experiments in criminal justice agencies: Workshop proceedings. Washington, DC: National Research Council.
McCollister, K. E., & French, M. T. (2001). The economic cost of substance abuse treatment in criminal justice settings. Miami, FL: University of Miami. Retrieved from www.amityfoundation.com/lib/libarch/CostPrisonTreatment.pdf
Mire, S. M., Forsyth, C., & Hanser, R. D. (2007). Jail diversion: Addressing the needs of offenders with mental illness and co-occurring disorders. Journal of Offender Rehabilitation, 45(1/2), 19–31.
National Institute of Justice. (1992). Evaluating drug control and system improvement projects: Guidelines for projects supported by the Bureau of Justice Assistance.
Schutt, R. K. (2006). Investigating the social world: The process and practice of research (5th ed.). Thousand Oaks, CA: Pine Forge Press.
Skinner, H. (1995). Drug Abuse Screening Test. Toronto, Canada: Addiction Research Foundation.
14 Evaluation, Effectiveness, and Offender RecidivismLEARNING OBJECT.docx

More Related Content

Similar to 14 Evaluation, Effectiveness, and Offender RecidivismLEARNING OBJECT.docx

There needs to be a seperate response to each peers posting and it .docx
There needs to be a seperate response to each peers posting and it .docxThere needs to be a seperate response to each peers posting and it .docx
There needs to be a seperate response to each peers posting and it .docxOllieShoresna
 
Data Analysis and Quality Improvement Initiative Proposal .docx
Data Analysis and Quality Improvement Initiative Proposal .docxData Analysis and Quality Improvement Initiative Proposal .docx
Data Analysis and Quality Improvement Initiative Proposal .docxwhittemorelucilla
 
Deliver a 5–7-page analysis of an existing quality improveme.docx
Deliver a 5–7-page analysis of an existing quality improveme.docxDeliver a 5–7-page analysis of an existing quality improveme.docx
Deliver a 5–7-page analysis of an existing quality improveme.docxvickeryr87
 
Basic Guide to Program Evaluation (Including Outcomes Evaluation).docx
Basic Guide to Program Evaluation (Including Outcomes Evaluation).docxBasic Guide to Program Evaluation (Including Outcomes Evaluation).docx
Basic Guide to Program Evaluation (Including Outcomes Evaluation).docxJASS44
 
Evaluation of health programs
Evaluation of health programsEvaluation of health programs
Evaluation of health programsnium
 
Quality Circle.docx
Quality Circle.docxQuality Circle.docx
Quality Circle.docxPALKAMITTAL
 
Data Analysis Quality Improvement Initiative Proposal.docx
Data Analysis Quality Improvement Initiative Proposal.docxData Analysis Quality Improvement Initiative Proposal.docx
Data Analysis Quality Improvement Initiative Proposal.docxstudywriters
 
CHAPTER SIXTEENUnderstanding Context Evaluation and Measureme
CHAPTER SIXTEENUnderstanding Context Evaluation and MeasuremeCHAPTER SIXTEENUnderstanding Context Evaluation and Measureme
CHAPTER SIXTEENUnderstanding Context Evaluation and MeasuremeJinElias52
 
OverviewPrepare an 8 page data analysis and quality improvement .docx
OverviewPrepare an 8 page data analysis and quality improvement .docxOverviewPrepare an 8 page data analysis and quality improvement .docx
OverviewPrepare an 8 page data analysis and quality improvement .docxkarlhennesey
 
Methods Of Program Evaluation. Evaluation Research Is Offered
Methods Of Program Evaluation. Evaluation Research Is OfferedMethods Of Program Evaluation. Evaluation Research Is Offered
Methods Of Program Evaluation. Evaluation Research Is OfferedJennifer Wood
 
Chapter 5 Program Evaluation and Research TechniquesCharlene R. .docx
Chapter 5 Program Evaluation and Research TechniquesCharlene R. .docxChapter 5 Program Evaluation and Research TechniquesCharlene R. .docx
Chapter 5 Program Evaluation and Research TechniquesCharlene R. .docxchristinemaritza
 
20200507_010443.jpg20200507_010448.jpg20200507_010502.jp.docx
20200507_010443.jpg20200507_010448.jpg20200507_010502.jp.docx20200507_010443.jpg20200507_010448.jpg20200507_010502.jp.docx
20200507_010443.jpg20200507_010448.jpg20200507_010502.jp.docxeugeniadean34240
 
20200507_010443.jpg20200507_010448.jpg20200507_010502.jp.docx
20200507_010443.jpg20200507_010448.jpg20200507_010502.jp.docx20200507_010443.jpg20200507_010448.jpg20200507_010502.jp.docx
20200507_010443.jpg20200507_010448.jpg20200507_010502.jp.docxnovabroom
 
QUESTION 1What are the main streams of influence, according to.docx
QUESTION 1What are the main streams of influence, according to.docxQUESTION 1What are the main streams of influence, according to.docx
QUESTION 1What are the main streams of influence, according to.docxmakdul
 
Capella Data Analysis Quality Improvement Initiative Proposal.docx
Capella Data Analysis Quality Improvement Initiative Proposal.docxCapella Data Analysis Quality Improvement Initiative Proposal.docx
Capella Data Analysis Quality Improvement Initiative Proposal.docxstirlingvwriters
 
1INTERPERSONAL RELATIONS2 1 Aggression and Violence.docx
1INTERPERSONAL RELATIONS2 1 Aggression and Violence.docx1INTERPERSONAL RELATIONS2 1 Aggression and Violence.docx
1INTERPERSONAL RELATIONS2 1 Aggression and Violence.docxfelicidaddinwoodie
 
Physician Online Ratings: Consumerization of Healthcare
Physician Online Ratings:  Consumerization of HealthcarePhysician Online Ratings:  Consumerization of Healthcare
Physician Online Ratings: Consumerization of HealthcareTrustRobin
 

Similar to 14 Evaluation, Effectiveness, and Offender RecidivismLEARNING OBJECT.docx (20)

There needs to be a seperate response to each peers posting and it .docx
There needs to be a seperate response to each peers posting and it .docxThere needs to be a seperate response to each peers posting and it .docx
There needs to be a seperate response to each peers posting and it .docx
 
Data Analysis and Quality Improvement Initiative Proposal .docx
Data Analysis and Quality Improvement Initiative Proposal .docxData Analysis and Quality Improvement Initiative Proposal .docx
Data Analysis and Quality Improvement Initiative Proposal .docx
 
Deliver a 5–7-page analysis of an existing quality improveme.docx
Deliver a 5–7-page analysis of an existing quality improveme.docxDeliver a 5–7-page analysis of an existing quality improveme.docx
Deliver a 5–7-page analysis of an existing quality improveme.docx
 
Agency Site Visit Paper AGENCY SITE VISIT All students will.pdf
Agency Site Visit Paper AGENCY SITE VISIT All students will.pdfAgency Site Visit Paper AGENCY SITE VISIT All students will.pdf
Agency Site Visit Paper AGENCY SITE VISIT All students will.pdf
 
Basic Guide to Program Evaluation (Including Outcomes Evaluation).docx
Basic Guide to Program Evaluation (Including Outcomes Evaluation).docxBasic Guide to Program Evaluation (Including Outcomes Evaluation).docx
Basic Guide to Program Evaluation (Including Outcomes Evaluation).docx
 
Evaluation of health programs
Evaluation of health programsEvaluation of health programs
Evaluation of health programs
 
Quality Circle.docx
Quality Circle.docxQuality Circle.docx
Quality Circle.docx
 
Data Analysis Quality Improvement Initiative Proposal.docx
Data Analysis Quality Improvement Initiative Proposal.docxData Analysis Quality Improvement Initiative Proposal.docx
Data Analysis Quality Improvement Initiative Proposal.docx
 
CHAPTER SIXTEENUnderstanding Context Evaluation and Measureme
CHAPTER SIXTEENUnderstanding Context Evaluation and MeasuremeCHAPTER SIXTEENUnderstanding Context Evaluation and Measureme
CHAPTER SIXTEENUnderstanding Context Evaluation and Measureme
 
OverviewPrepare an 8 page data analysis and quality improvement .docx
OverviewPrepare an 8 page data analysis and quality improvement .docxOverviewPrepare an 8 page data analysis and quality improvement .docx
OverviewPrepare an 8 page data analysis and quality improvement .docx
 
Methods Of Program Evaluation. Evaluation Research Is Offered
Methods Of Program Evaluation. Evaluation Research Is OfferedMethods Of Program Evaluation. Evaluation Research Is Offered
Methods Of Program Evaluation. Evaluation Research Is Offered
 
Chapter 5 Program Evaluation and Research TechniquesCharlene R. .docx
Chapter 5 Program Evaluation and Research TechniquesCharlene R. .docxChapter 5 Program Evaluation and Research TechniquesCharlene R. .docx
Chapter 5 Program Evaluation and Research TechniquesCharlene R. .docx
 
20200507_010443.jpg20200507_010448.jpg20200507_010502.jp.docx
20200507_010443.jpg20200507_010448.jpg20200507_010502.jp.docx20200507_010443.jpg20200507_010448.jpg20200507_010502.jp.docx
20200507_010443.jpg20200507_010448.jpg20200507_010502.jp.docx
 
20200507_010443.jpg20200507_010448.jpg20200507_010502.jp.docx
20200507_010443.jpg20200507_010448.jpg20200507_010502.jp.docx20200507_010443.jpg20200507_010448.jpg20200507_010502.jp.docx
20200507_010443.jpg20200507_010448.jpg20200507_010502.jp.docx
 
QUESTION 1What are the main streams of influence, according to.docx
QUESTION 1What are the main streams of influence, according to.docxQUESTION 1What are the main streams of influence, according to.docx
QUESTION 1What are the main streams of influence, according to.docx
 
Capella Data Analysis Quality Improvement Initiative Proposal.docx
Capella Data Analysis Quality Improvement Initiative Proposal.docxCapella Data Analysis Quality Improvement Initiative Proposal.docx
Capella Data Analysis Quality Improvement Initiative Proposal.docx
 
Program Evaluation
Program EvaluationProgram Evaluation
Program Evaluation
 
1INTERPERSONAL RELATIONS2 1 Aggression and Violence.docx
1INTERPERSONAL RELATIONS2 1 Aggression and Violence.docx1INTERPERSONAL RELATIONS2 1 Aggression and Violence.docx
1INTERPERSONAL RELATIONS2 1 Aggression and Violence.docx
 
Physician Online Ratings: Consumerization of Healthcare
Physician Online Ratings:  Consumerization of HealthcarePhysician Online Ratings:  Consumerization of Healthcare
Physician Online Ratings: Consumerization of Healthcare
 
360-Evaluation Methods
360-Evaluation Methods360-Evaluation Methods
360-Evaluation Methods
 

More from moggdede

CASE STUDY COMMENTARY•  Individual written task in Harvard sty.docx
CASE STUDY COMMENTARY•  Individual written task in Harvard sty.docxCASE STUDY COMMENTARY•  Individual written task in Harvard sty.docx
CASE STUDY COMMENTARY•  Individual written task in Harvard sty.docxmoggdede
 
Case Study Chapter 5 100 wordsTranscultural Nursing in the.docx
Case Study Chapter 5 100 wordsTranscultural Nursing in the.docxCase Study Chapter 5 100 wordsTranscultural Nursing in the.docx
Case Study Chapter 5 100 wordsTranscultural Nursing in the.docxmoggdede
 
Case Study Chapter 10 Boss, We’ve got a problemBy Kayla Cur.docx
Case Study Chapter 10 Boss, We’ve got a problemBy Kayla Cur.docxCase Study Chapter 10 Boss, We’ve got a problemBy Kayla Cur.docx
Case Study Chapter 10 Boss, We’ve got a problemBy Kayla Cur.docxmoggdede
 
CASE STUDY Caregiver Role Strain Ms. Sandra A. Sandra, a 47-year-o.docx
CASE STUDY Caregiver Role Strain Ms. Sandra A. Sandra, a 47-year-o.docxCASE STUDY Caregiver Role Strain Ms. Sandra A. Sandra, a 47-year-o.docx
CASE STUDY Caregiver Role Strain Ms. Sandra A. Sandra, a 47-year-o.docxmoggdede
 
Case Study Answers Week 7 and 8Group OneIn your grou.docx
Case Study Answers Week 7 and 8Group OneIn your grou.docxCase Study Answers Week 7 and 8Group OneIn your grou.docx
Case Study Answers Week 7 and 8Group OneIn your grou.docxmoggdede
 
Case Study and Transition Plan TemplateCase StudyD.docx
Case Study and Transition Plan TemplateCase StudyD.docxCase Study and Transition Plan TemplateCase StudyD.docx
Case Study and Transition Plan TemplateCase StudyD.docxmoggdede
 
Case Study AnalysisRead Compassion for Samantha Case Study.docx
Case Study AnalysisRead Compassion for Samantha Case Study.docxCase Study AnalysisRead Compassion for Samantha Case Study.docx
Case Study AnalysisRead Compassion for Samantha Case Study.docxmoggdede
 
Case Study AnalysisAn understanding of cells and cell behavi.docx
Case Study AnalysisAn understanding of cells and cell behavi.docxCase Study AnalysisAn understanding of cells and cell behavi.docx
Case Study AnalysisAn understanding of cells and cell behavi.docxmoggdede
 
Case Study Analysis and FindingsThe final assignment for this co.docx
Case Study Analysis and FindingsThe final assignment for this co.docxCase Study Analysis and FindingsThe final assignment for this co.docx
Case Study Analysis and FindingsThe final assignment for this co.docxmoggdede
 
Case Study Analysis A TutorialWhat is it Case studies are a .docx
Case Study Analysis  A TutorialWhat is it  Case studies are a .docxCase Study Analysis  A TutorialWhat is it  Case studies are a .docx
Case Study Analysis A TutorialWhat is it Case studies are a .docxmoggdede
 
Case Study AlcoholCertain occasional behaviors can cause more tro.docx
Case Study AlcoholCertain occasional behaviors can cause more tro.docxCase Study AlcoholCertain occasional behaviors can cause more tro.docx
Case Study AlcoholCertain occasional behaviors can cause more tro.docxmoggdede
 
Case study A group of nurse educators are having a discussion about.docx
Case study A group of nurse educators are having a discussion about.docxCase study A group of nurse educators are having a discussion about.docx
Case study A group of nurse educators are having a discussion about.docxmoggdede
 
Case study ;1Callista Roy and Betty Neumans theories view the.docx
Case study ;1Callista Roy and Betty Neumans theories view the.docxCase study ;1Callista Roy and Betty Neumans theories view the.docx
Case study ;1Callista Roy and Betty Neumans theories view the.docxmoggdede
 
Case Study 9Running head BP & THE GULF OF MEXICO OIL SPILLC.docx
Case Study 9Running head BP & THE GULF OF MEXICO OIL SPILLC.docxCase Study 9Running head BP & THE GULF OF MEXICO OIL SPILLC.docx
Case Study 9Running head BP & THE GULF OF MEXICO OIL SPILLC.docxmoggdede
 
Case Study 9-1 IT Governance at University of the Southeast. Answer .docx
Case Study 9-1 IT Governance at University of the Southeast. Answer .docxCase Study 9-1 IT Governance at University of the Southeast. Answer .docx
Case Study 9-1 IT Governance at University of the Southeast. Answer .docxmoggdede
 
Case Study 7-2 Sony Pictures The Criminals Won. Answer question 2 W.docx
Case Study 7-2 Sony Pictures The Criminals Won. Answer question 2 W.docxCase Study 7-2 Sony Pictures The Criminals Won. Answer question 2 W.docx
Case Study 7-2 Sony Pictures The Criminals Won. Answer question 2 W.docxmoggdede
 
Case Study 8.1 Team DenialEmory University Holocaust studies pr.docx
Case Study 8.1 Team DenialEmory University Holocaust studies pr.docxCase Study 8.1 Team DenialEmory University Holocaust studies pr.docx
Case Study 8.1 Team DenialEmory University Holocaust studies pr.docxmoggdede
 
Case Study 7 Solving Team Challenges at DocSystems Billing, Inc.docx
Case Study 7 Solving Team Challenges at DocSystems Billing, Inc.docxCase Study 7 Solving Team Challenges at DocSystems Billing, Inc.docx
Case Study 7 Solving Team Challenges at DocSystems Billing, Inc.docxmoggdede
 
Case Study 5.2 Hiding the Real Story at Midwestern Community Acti.docx
Case Study 5.2 Hiding the Real Story at Midwestern Community Acti.docxCase Study 5.2 Hiding the Real Story at Midwestern Community Acti.docx
Case Study 5.2 Hiding the Real Story at Midwestern Community Acti.docxmoggdede
 
Case Study 5.1Write a 3 to 4 (not including title or reference.docx
Case Study 5.1Write a 3 to 4 (not including title or reference.docxCase Study 5.1Write a 3 to 4 (not including title or reference.docx
Case Study 5.1Write a 3 to 4 (not including title or reference.docxmoggdede
 

More from moggdede (20)

CASE STUDY COMMENTARY•  Individual written task in Harvard sty.docx
CASE STUDY COMMENTARY•  Individual written task in Harvard sty.docxCASE STUDY COMMENTARY•  Individual written task in Harvard sty.docx
CASE STUDY COMMENTARY•  Individual written task in Harvard sty.docx
 
Case Study Chapter 5 100 wordsTranscultural Nursing in the.docx
Case Study Chapter 5 100 wordsTranscultural Nursing in the.docxCase Study Chapter 5 100 wordsTranscultural Nursing in the.docx
Case Study Chapter 5 100 wordsTranscultural Nursing in the.docx
 
Case Study Chapter 10 Boss, We’ve got a problemBy Kayla Cur.docx
Case Study Chapter 10 Boss, We’ve got a problemBy Kayla Cur.docxCase Study Chapter 10 Boss, We’ve got a problemBy Kayla Cur.docx
Case Study Chapter 10 Boss, We’ve got a problemBy Kayla Cur.docx
 
CASE STUDY Caregiver Role Strain Ms. Sandra A. Sandra, a 47-year-o.docx
CASE STUDY Caregiver Role Strain Ms. Sandra A. Sandra, a 47-year-o.docxCASE STUDY Caregiver Role Strain Ms. Sandra A. Sandra, a 47-year-o.docx
CASE STUDY Caregiver Role Strain Ms. Sandra A. Sandra, a 47-year-o.docx
 
Case Study Answers Week 7 and 8Group OneIn your grou.docx
Case Study Answers Week 7 and 8Group OneIn your grou.docxCase Study Answers Week 7 and 8Group OneIn your grou.docx
Case Study Answers Week 7 and 8Group OneIn your grou.docx
 
Case Study and Transition Plan TemplateCase StudyD.docx
Case Study and Transition Plan TemplateCase StudyD.docxCase Study and Transition Plan TemplateCase StudyD.docx
Case Study and Transition Plan TemplateCase StudyD.docx
 
Case Study AnalysisRead Compassion for Samantha Case Study.docx
Case Study AnalysisRead Compassion for Samantha Case Study.docxCase Study AnalysisRead Compassion for Samantha Case Study.docx
Case Study AnalysisRead Compassion for Samantha Case Study.docx
 
Case Study AnalysisAn understanding of cells and cell behavi.docx
Case Study AnalysisAn understanding of cells and cell behavi.docxCase Study AnalysisAn understanding of cells and cell behavi.docx
Case Study AnalysisAn understanding of cells and cell behavi.docx
 
Case Study Analysis and FindingsThe final assignment for this co.docx
Case Study Analysis and FindingsThe final assignment for this co.docxCase Study Analysis and FindingsThe final assignment for this co.docx
Case Study Analysis and FindingsThe final assignment for this co.docx
 
Case Study Analysis A TutorialWhat is it Case studies are a .docx
Case Study Analysis  A TutorialWhat is it  Case studies are a .docxCase Study Analysis  A TutorialWhat is it  Case studies are a .docx
Case Study Analysis A TutorialWhat is it Case studies are a .docx
 
Case Study AlcoholCertain occasional behaviors can cause more tro.docx
Case Study AlcoholCertain occasional behaviors can cause more tro.docxCase Study AlcoholCertain occasional behaviors can cause more tro.docx
Case Study AlcoholCertain occasional behaviors can cause more tro.docx
 
Case study A group of nurse educators are having a discussion about.docx
Case study A group of nurse educators are having a discussion about.docxCase study A group of nurse educators are having a discussion about.docx
Case study A group of nurse educators are having a discussion about.docx
 
Case study ;1Callista Roy and Betty Neumans theories view the.docx
Case study ;1Callista Roy and Betty Neumans theories view the.docxCase study ;1Callista Roy and Betty Neumans theories view the.docx
Case study ;1Callista Roy and Betty Neumans theories view the.docx
 
Case Study 9Running head BP & THE GULF OF MEXICO OIL SPILLC.docx
Case Study 9Running head BP & THE GULF OF MEXICO OIL SPILLC.docxCase Study 9Running head BP & THE GULF OF MEXICO OIL SPILLC.docx
Case Study 9Running head BP & THE GULF OF MEXICO OIL SPILLC.docx
 
Case Study 9-1 IT Governance at University of the Southeast. Answer .docx
Case Study 9-1 IT Governance at University of the Southeast. Answer .docxCase Study 9-1 IT Governance at University of the Southeast. Answer .docx
Case Study 9-1 IT Governance at University of the Southeast. Answer .docx
 
Case Study 7-2 Sony Pictures The Criminals Won. Answer question 2 W.docx
Case Study 7-2 Sony Pictures The Criminals Won. Answer question 2 W.docxCase Study 7-2 Sony Pictures The Criminals Won. Answer question 2 W.docx
Case Study 7-2 Sony Pictures The Criminals Won. Answer question 2 W.docx
 
Case Study 8.1 Team DenialEmory University Holocaust studies pr.docx
Case Study 8.1 Team DenialEmory University Holocaust studies pr.docxCase Study 8.1 Team DenialEmory University Holocaust studies pr.docx
Case Study 8.1 Team DenialEmory University Holocaust studies pr.docx
 
Case Study 7 Solving Team Challenges at DocSystems Billing, Inc.docx
Case Study 7 Solving Team Challenges at DocSystems Billing, Inc.docxCase Study 7 Solving Team Challenges at DocSystems Billing, Inc.docx
Case Study 7 Solving Team Challenges at DocSystems Billing, Inc.docx
 
Case Study 5.2 Hiding the Real Story at Midwestern Community Acti.docx
Case Study 5.2 Hiding the Real Story at Midwestern Community Acti.docxCase Study 5.2 Hiding the Real Story at Midwestern Community Acti.docx
Case Study 5.2 Hiding the Real Story at Midwestern Community Acti.docx
 
Case Study 5.1Write a 3 to 4 (not including title or reference.docx
Case Study 5.1Write a 3 to 4 (not including title or reference.docxCase Study 5.1Write a 3 to 4 (not including title or reference.docx
Case Study 5.1Write a 3 to 4 (not including title or reference.docx
 

Recently uploaded

UChicago CMSC 23320 - The Best Commit Messages of 2024
UChicago CMSC 23320 - The Best Commit Messages of 2024UChicago CMSC 23320 - The Best Commit Messages of 2024
UChicago CMSC 23320 - The Best Commit Messages of 2024Borja Sotomayor
 
24 ĐỀ THAM KHẢO KÌ THI TUYỂN SINH VÀO LỚP 10 MÔN TIẾNG ANH SỞ GIÁO DỤC HẢI DƯ...
24 ĐỀ THAM KHẢO KÌ THI TUYỂN SINH VÀO LỚP 10 MÔN TIẾNG ANH SỞ GIÁO DỤC HẢI DƯ...24 ĐỀ THAM KHẢO KÌ THI TUYỂN SINH VÀO LỚP 10 MÔN TIẾNG ANH SỞ GIÁO DỤC HẢI DƯ...
24 ĐỀ THAM KHẢO KÌ THI TUYỂN SINH VÀO LỚP 10 MÔN TIẾNG ANH SỞ GIÁO DỤC HẢI DƯ...Nguyen Thanh Tu Collection
 
How To Create Editable Tree View in Odoo 17
How To Create Editable Tree View in Odoo 17How To Create Editable Tree View in Odoo 17
How To Create Editable Tree View in Odoo 17Celine George
 
會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽
會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽
會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽中 央社
 
DEMONSTRATION LESSON IN ENGLISH 4 MATATAG CURRICULUM
DEMONSTRATION LESSON IN ENGLISH 4 MATATAG CURRICULUMDEMONSTRATION LESSON IN ENGLISH 4 MATATAG CURRICULUM
DEMONSTRATION LESSON IN ENGLISH 4 MATATAG CURRICULUMELOISARIVERA8
 
How to Manage Website in Odoo 17 Studio App.pptx
How to Manage Website in Odoo 17 Studio App.pptxHow to Manage Website in Odoo 17 Studio App.pptx
How to Manage Website in Odoo 17 Studio App.pptxCeline George
 
PSYPACT- Practicing Over State Lines May 2024.pptx
PSYPACT- Practicing Over State Lines May 2024.pptxPSYPACT- Practicing Over State Lines May 2024.pptx
PSYPACT- Practicing Over State Lines May 2024.pptxMarlene Maheu
 
AIM of Education-Teachers Training-2024.ppt
AIM of Education-Teachers Training-2024.pptAIM of Education-Teachers Training-2024.ppt
AIM of Education-Teachers Training-2024.pptNishitharanjan Rout
 
Analyzing and resolving a communication crisis in Dhaka textiles LTD.pptx
Analyzing and resolving a communication crisis in Dhaka textiles LTD.pptxAnalyzing and resolving a communication crisis in Dhaka textiles LTD.pptx
Analyzing and resolving a communication crisis in Dhaka textiles LTD.pptxLimon Prince
 
How to Send Pro Forma Invoice to Your Customers in Odoo 17
How to Send Pro Forma Invoice to Your Customers in Odoo 17How to Send Pro Forma Invoice to Your Customers in Odoo 17
How to Send Pro Forma Invoice to Your Customers in Odoo 17Celine George
 
Transparency, Recognition and the role of eSealing - Ildiko Mazar and Koen No...
Transparency, Recognition and the role of eSealing - Ildiko Mazar and Koen No...Transparency, Recognition and the role of eSealing - Ildiko Mazar and Koen No...
Transparency, Recognition and the role of eSealing - Ildiko Mazar and Koen No...EADTU
 
Major project report on Tata Motors and its marketing strategies
Major project report on Tata Motors and its marketing strategiesMajor project report on Tata Motors and its marketing strategies
Major project report on Tata Motors and its marketing strategiesAmanpreetKaur157993
 
Basic Civil Engineering notes on Transportation Engineering & Modes of Transport
Basic Civil Engineering notes on Transportation Engineering & Modes of TransportBasic Civil Engineering notes on Transportation Engineering & Modes of Transport
Basic Civil Engineering notes on Transportation Engineering & Modes of TransportDenish Jangid
 
Observing-Correct-Grammar-in-Making-Definitions.pptx
Observing-Correct-Grammar-in-Making-Definitions.pptxObserving-Correct-Grammar-in-Making-Definitions.pptx
Observing-Correct-Grammar-in-Making-Definitions.pptxAdelaideRefugio
 
TỔNG HỢP HƠN 100 ĐỀ THI THỬ TỐT NGHIỆP THPT TOÁN 2024 - TỪ CÁC TRƯỜNG, TRƯỜNG...
TỔNG HỢP HƠN 100 ĐỀ THI THỬ TỐT NGHIỆP THPT TOÁN 2024 - TỪ CÁC TRƯỜNG, TRƯỜNG...TỔNG HỢP HƠN 100 ĐỀ THI THỬ TỐT NGHIỆP THPT TOÁN 2024 - TỪ CÁC TRƯỜNG, TRƯỜNG...
TỔNG HỢP HƠN 100 ĐỀ THI THỬ TỐT NGHIỆP THPT TOÁN 2024 - TỪ CÁC TRƯỜNG, TRƯỜNG...Nguyen Thanh Tu Collection
 
diagnosting testing bsc 2nd sem.pptx....
diagnosting testing bsc 2nd sem.pptx....diagnosting testing bsc 2nd sem.pptx....
diagnosting testing bsc 2nd sem.pptx....Ritu480198
 
Sternal Fractures & Dislocations - EMGuidewire Radiology Reading Room
Sternal Fractures & Dislocations - EMGuidewire Radiology Reading RoomSternal Fractures & Dislocations - EMGuidewire Radiology Reading Room
Sternal Fractures & Dislocations - EMGuidewire Radiology Reading RoomSean M. Fox
 

Recently uploaded (20)

UChicago CMSC 23320 - The Best Commit Messages of 2024
UChicago CMSC 23320 - The Best Commit Messages of 2024UChicago CMSC 23320 - The Best Commit Messages of 2024
UChicago CMSC 23320 - The Best Commit Messages of 2024
 
OS-operating systems- ch05 (CPU Scheduling) ...
OS-operating systems- ch05 (CPU Scheduling) ...OS-operating systems- ch05 (CPU Scheduling) ...
OS-operating systems- ch05 (CPU Scheduling) ...
 
24 ĐỀ THAM KHẢO KÌ THI TUYỂN SINH VÀO LỚP 10 MÔN TIẾNG ANH SỞ GIÁO DỤC HẢI DƯ...
24 ĐỀ THAM KHẢO KÌ THI TUYỂN SINH VÀO LỚP 10 MÔN TIẾNG ANH SỞ GIÁO DỤC HẢI DƯ...24 ĐỀ THAM KHẢO KÌ THI TUYỂN SINH VÀO LỚP 10 MÔN TIẾNG ANH SỞ GIÁO DỤC HẢI DƯ...
24 ĐỀ THAM KHẢO KÌ THI TUYỂN SINH VÀO LỚP 10 MÔN TIẾNG ANH SỞ GIÁO DỤC HẢI DƯ...
 
How To Create Editable Tree View in Odoo 17
How To Create Editable Tree View in Odoo 17How To Create Editable Tree View in Odoo 17
How To Create Editable Tree View in Odoo 17
 
會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽
會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽
會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽
 
DEMONSTRATION LESSON IN ENGLISH 4 MATATAG CURRICULUM
DEMONSTRATION LESSON IN ENGLISH 4 MATATAG CURRICULUMDEMONSTRATION LESSON IN ENGLISH 4 MATATAG CURRICULUM
DEMONSTRATION LESSON IN ENGLISH 4 MATATAG CURRICULUM
 
How to Manage Website in Odoo 17 Studio App.pptx
How to Manage Website in Odoo 17 Studio App.pptxHow to Manage Website in Odoo 17 Studio App.pptx
How to Manage Website in Odoo 17 Studio App.pptx
 
PSYPACT- Practicing Over State Lines May 2024.pptx
PSYPACT- Practicing Over State Lines May 2024.pptxPSYPACT- Practicing Over State Lines May 2024.pptx
PSYPACT- Practicing Over State Lines May 2024.pptx
 
VAMOS CUIDAR DO NOSSO PLANETA! .
VAMOS CUIDAR DO NOSSO PLANETA!                    .VAMOS CUIDAR DO NOSSO PLANETA!                    .
VAMOS CUIDAR DO NOSSO PLANETA! .
 
AIM of Education-Teachers Training-2024.ppt
AIM of Education-Teachers Training-2024.pptAIM of Education-Teachers Training-2024.ppt
AIM of Education-Teachers Training-2024.ppt
 
Analyzing and resolving a communication crisis in Dhaka textiles LTD.pptx
Analyzing and resolving a communication crisis in Dhaka textiles LTD.pptxAnalyzing and resolving a communication crisis in Dhaka textiles LTD.pptx
Analyzing and resolving a communication crisis in Dhaka textiles LTD.pptx
 
How to Send Pro Forma Invoice to Your Customers in Odoo 17
How to Send Pro Forma Invoice to Your Customers in Odoo 17How to Send Pro Forma Invoice to Your Customers in Odoo 17
How to Send Pro Forma Invoice to Your Customers in Odoo 17
 
Transparency, Recognition and the role of eSealing - Ildiko Mazar and Koen No...
Transparency, Recognition and the role of eSealing - Ildiko Mazar and Koen No...Transparency, Recognition and the role of eSealing - Ildiko Mazar and Koen No...
Transparency, Recognition and the role of eSealing - Ildiko Mazar and Koen No...
 
Major project report on Tata Motors and its marketing strategies
Major project report on Tata Motors and its marketing strategiesMajor project report on Tata Motors and its marketing strategies
Major project report on Tata Motors and its marketing strategies
 
Basic Civil Engineering notes on Transportation Engineering & Modes of Transport
Basic Civil Engineering notes on Transportation Engineering & Modes of TransportBasic Civil Engineering notes on Transportation Engineering & Modes of Transport
Basic Civil Engineering notes on Transportation Engineering & Modes of Transport
 
Observing-Correct-Grammar-in-Making-Definitions.pptx
Observing-Correct-Grammar-in-Making-Definitions.pptxObserving-Correct-Grammar-in-Making-Definitions.pptx
Observing-Correct-Grammar-in-Making-Definitions.pptx
 
TỔNG HỢP HƠN 100 ĐỀ THI THỬ TỐT NGHIỆP THPT TOÁN 2024 - TỪ CÁC TRƯỜNG, TRƯỜNG...
TỔNG HỢP HƠN 100 ĐỀ THI THỬ TỐT NGHIỆP THPT TOÁN 2024 - TỪ CÁC TRƯỜNG, TRƯỜNG...TỔNG HỢP HƠN 100 ĐỀ THI THỬ TỐT NGHIỆP THPT TOÁN 2024 - TỪ CÁC TRƯỜNG, TRƯỜNG...
TỔNG HỢP HƠN 100 ĐỀ THI THỬ TỐT NGHIỆP THPT TOÁN 2024 - TỪ CÁC TRƯỜNG, TRƯỜNG...
 
diagnosting testing bsc 2nd sem.pptx....
diagnosting testing bsc 2nd sem.pptx....diagnosting testing bsc 2nd sem.pptx....
diagnosting testing bsc 2nd sem.pptx....
 
Sternal Fractures & Dislocations - EMGuidewire Radiology Reading Room
Sternal Fractures & Dislocations - EMGuidewire Radiology Reading RoomSternal Fractures & Dislocations - EMGuidewire Radiology Reading Room
Sternal Fractures & Dislocations - EMGuidewire Radiology Reading Room
 
Including Mental Health Support in Project Delivery, 14 May.pdf
Including Mental Health Support in Project Delivery, 14 May.pdfIncluding Mental Health Support in Project Delivery, 14 May.pdf
Including Mental Health Support in Project Delivery, 14 May.pdf
 

14 Evaluation, Effectiveness, and Offender RecidivismLEARNING OBJECT.docx

  • 1. 14 Evaluation, Effectiveness, and Offender RecidivismLEARNING OBJECTIVES After reading this chapter, you will be able to: · 1. Discuss the importance of evaluative research and the role of the independent evaluator. · 2. Explain the importance of quantitative processes in determining if treatment programs are evidence based in their practice. · 3. Explain how validity and reliability are important to the evaluation process. · 4. Identify some standardized instruments and explain why they are beneficial to the evaluation of treatment programs. · 5. Explain how evaluations of drug treatment programs and sex offender treatment programs might be conducted. · 6. Identify some of the ethical considerations when conducting evaluative research in mental health settings. · 7. Discuss how evaluation results can be used to improve treatment program processes and outcomes.PART ONE: INTRODUCTION TO THE EVALUATION PROCESS When examining any program, whether therapeutic or otherwise, one of the first questions asked by politicians, policy makers, program administrators, and government officials is, “Does the treatment program work?” In such a case, the underlying desire is to know if money spent on a program is money that is well spent. In such cases, treatment providers will often be required to provide some sort of empirical “evidence” that the program is effective. This is often referred to as evidence-based program delivery. Treatment providers are increasingly being asked to demonstrate the effectiveness of their programs, particularly when such programs are grant funded. In turn, many correctional treatment programs seek money from grant-generating agencies, and, when they have some sort of documented program success, they increase their odds of securing such funds.
  • 2. However, before going further, we would like to make one observation regarding correctional counseling and research. We believe that programs are best evaluated by researchers who themselves are treatment providers. This is particularly true if the researcher has had specific experience with the type of population that is the subject of the program evaluation. Both authors have conducted grant-funded evaluation research of treatment programs and have also studied and/or worked in a variety of treatment fields. One author in particular has worked with most typologies of offenders who have been presented in this text and has also conducted numerous evaluative studies of treatment programs that provide services to those offenders. We believe that this is important because such a practitioner is able to make sense of data that may seem confusing, uncertain, or contradictory, simply because they understand how the program and/or process of treatment intervention works within a given agency and/or with a specific offender population. With this said, it is at this point that we now turn our attention to the notion of evaluation research.Evaluation Research For the purposes of this text, we will refer to the Center for Program Evaluation and Performance Management which is a clearinghouse on evaluative research offered through the Bureau of Justice Assistance (BJA). This source, available online and referenced in this text, provides the reader with a very good overview of the evaluation process and also provides a number of examples pertaining to the evaluation of criminal justice and treatment programs. Because this is a federal government website, the information therein is public domain. In addition, we believe that this site provides a very clear, succinct, and effective overview of evaluation research from the eyes of the practitioner. It is for these reasons that this chapter is constructed from much of the organization and structure of the BJA website, providing the basics of evaluation research along with our own insights as to how that information is useful to correctional counselors.
  • 3. Evaluation is a systematic and objective method for testing the success (or failure) of a given program. The primary purpose of conducting evaluative research is to determine if the intervention program is achieving its stated goals and objectives. In the field of correctional counseling, this is actually very important. It is the observation of the first author of this text that, in many cases, treatment programs provide their services but are not truly aware of whether they have actually “fixed” their clients; this is an important point to address. Treatment agencies must be able and willing to demonstrate the effectiveness of their program’s intervention and this effectiveness should be expressed in quantitative terms. A failure to do so constitutes negligence on the part of the agency and also leads to a potential public safety problem. Indeed, if the program does not truly work to reform offenders but the treatment staff continue to operate as if it does, offenders who are risks to public safety will continue to enter society unchanged, just as dangerous or problematic as before. Often, counselors and other personnel primarily geared toward offering therapeutic services do not necessarily understand the purpose of evaluative research. In addition, it is not uncommon for such practitioners to also discount the contributions of an evaluator, claiming that the evaluator cannot possibly know (better than themselves) whether clients are “getting better,” so to speak. However, this is often based on intuition on the part of the therapist and is also not grounded in objective and detached observation. Evaluative research seeks to look at the process and outcome of correctional counseling in an objective and detached manner to determine the objective truth as to the efficacy of a given program. All too often, treatment staff may provide anecdotal evidence and/or selected cases of success. This should be avoided because it is not sufficient to demonstrate effectiveness and because too much is
  • 4. left to interpretation. Rather, it is important that evaluations of therapeutic programs be conducted by persons who are neutral and detached from the delivery of therapeutic services and it is also important that quantitative as well as qualitative measures be included in that evaluation. Qualitative measures are those that are not numerical in nature and are based more on the context and circumstances of the observation. For instance, clinical case notes, open-ended interviews, and therapist observations would be examples of qualitative observations. On the other hand, quantitative measures are those that have a numerical quantity attached to them. Quantitative measures are those derived from standardized instruments that provide a numerical value to the information gathered from a client.Working with an Outside Evaluator One of the first issues that agencies will need to consider is whether to use an evaluation expert and whether that person can be from within the agency or whether they should instead come outside of the agency being evaluated. If the agency has funding available, it is recommended that they find a trained and experienced evaluator; such a person can be of great assistance to the treatment program throughout the evaluation process. However, it should be noted that agencies and agency staff must be receptive to the efforts of the evaluator. In many cases, agency staff may be defensive and/or guarded when providing information or records. In such cases, it is imperative that agency leadership ensure that hindrances to data collection and the communication of client outcomes be sufficiently addressed. Regardless of whether the evaluator is from within or outside the agency, it is important that a trained and qualified evaluator be identified and secured. A failure to achieve this basic ingredient of the evaluation process will mean that counselors, clinicians, and perhaps clients, will “feel” as if the treatment regimen is working but they will not be able to provide any type of evidence-based support for their opinions. Obviously, this is not scientifically sound nor is it convincing to any potential
  • 5. skeptic who might examine the agency. Lastly, a qualified evaluator should have experience in evaluating treatment programs and, ideally, should have experience in evaluating treatment programs similar to the one operated by the agency in question. The evaluator should also attempt to balance the needs and concerns of various decision makers with the need for objectivity while conducting the evaluation. Once it has been determined that the agency is ready for evaluation and who the evaluator will be, the process of developing an evaluation plan begins. Basically, an evaluation plan describes the process that will be used to conduct an evaluation of the treatment program (Bureau of Justice Assistance, 2008). According to the BJA (2008), key elements of an evaluation plan that should be addressed are (1) determining the target audience for the evaluation and the dissemination of its results; (2) identifying the evaluation questions that should be asked; (3) determining how the evaluation design will be developed; (4) deciding the type of data to be collected, how that data will be collected, and by whom; and (5) articulating the final products of the report that will be produced. Lastly, the evaluation plan should detail the roles of various individuals who will contribute to the evaluation process; these individuals include the evaluator, the agency management, treatment staff, clients, family members of clients, and any other persons impacted by the research. Likewise, an ideal evaluator will have had experience in delivery of therapeutic services that are the same or similar to those provided by the agency. This is important because it provides the evaluator with additional insight behind the data that is generated. Such insight can lead to a particularly useful blend of observations that dwell betwixt the world of the clinical practitioner and the academic researcher; this is the
  • 6. strongest and most useful type of evaluative research that can be produced.Quantitative Evaluation of a Drug Treatment Program An example of an evaluation plan that uses both quantitative and qualitative aspects of measurement is provided in the following evaluation description. This information consists of an evaluation model that the first author designed while working as an evaluator at a local drug treatment facility. This evaluation design demonstrates how the treatment staff and the evaluator may both provide observations, but it is the use of standardized instruments and collection methods that serve as the primary data used to determine client progress. (The use of standardized tools will be discussed later in this chapter.) Further, this example demonstrates that measures, to be effective, must be taken over a long period of time and among many different sources (i.e., agency staff, the evaluator, and/or family and friends of the client). It is in this manner that a composite profile of the client’s overall progress is developed. A. Evaluative Methods. This research design will follow a simple time-series design with repeated measures over the period of the grant-funded period. It is expected that the evaluative design will allow the agency to address all related program outcome questions as well as process questions, as required by this grant-funding opportunity. During the grant- funded period, weekly staff observations will be conducted to track client progress through the use of an evaluative rubric that is based on the basic tenets of operant conditioning strategies. When observing client progress, staff will ensure that their noted input is structured in such a manner as to optimize measurability while including contextual, subjective, and qualitative data that is deemed clinically useful or relevant. Further, staff will be required to provide a list of intervention techniques and behavior management tools that utilize each of the four categories. B. Data Collection Instruments. In addition, several pretest and post-test measures will be taken to assess both the subject’s
  • 7. recovery from alcohol or drug abuse and to assess their improvement in their other co-occurring mental health diagnoses. In addition to quantitative assessments of both of these areas of client outcome, semistructured qualitative client observations will be conducted by various staff at the pretest and post-test stages. One of these forms of interview is known as the Addiction Severity Index (ASI) and is commonly used in treatment facilities all over the United States. This will serve as an initial data collection process on clients and it is expected that this data will be more useful to treatment staff than to those having research objectives. Four other measurement scales will be utilized at intake and at discharge (three months) of the first phase of treatment. These scales are as follows: The Drug Abuse Screening Test (Skinner, 1995), which is a widely recognized scale providing a quantitative index of the degree of problems related to drug and/or alcohol dependency. The Substance Abuse Subtle Screening Instrument (SASSI) is a screening measure that provides interpretations of client profiles and aids in developing hypotheses that clinicians or researchers may find useful in understanding persons in treatment. The Behaviors, Attitudes, Drinking, & Driving Scale (BADDS) will be administered at intake, program completion, and the three-month follow-up period. The BADDS is an evidence-based pre- and post-test psychological questionnaire that measures attitudes, behaviors, and intervention effectiveness related to impaired driving. Optionally, the Maryland Addictions Questionnaire (Western Psychological Services) may be given at intake. This scale determines severity of addiction; the motivation of the client; the risk of relapse; and treatment complications related to cognitive difficulties, anxiety, or depression. When and where feasible, these scales will likewise be utilized with clients at the 6-month, 9-month, and 12-month periods for subjects in treatment.
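To make the pre/post logic of this repeated-measures design concrete, the following is a minimal sketch (in Python, using pandas and SciPy) of how intake and three-month scores from an instrument such as the DAST might be tabulated and compared. The client identifiers and scores are invented for illustration, and a paired t-test is only one of several defensible ways to summarize change in a design of this kind.

# Illustrative sketch only: hypothetical DAST-style totals for a small cohort,
# recorded at intake and at the three-month discharge point of phase one.
import pandas as pd
from scipy import stats

scores = pd.DataFrame({
    "client_id": ["c01", "c02", "c03", "c04", "c05", "c06"],
    "dast_intake": [16, 14, 18, 12, 15, 17],
    "dast_3month": [11, 10, 15, 9, 13, 12],
})

# Change score for each client (negative values indicate improvement).
scores["change"] = scores["dast_3month"] - scores["dast_intake"]

# A paired comparison summarizes pre/post movement; it is not a full
# time-series analysis of the later 6-, 9-, and 12-month administrations.
t_stat, p_value = stats.ttest_rel(scores["dast_intake"], scores["dast_3month"])

print(scores)
print(f"Mean change: {scores['change'].mean():.2f}")
print(f"Paired t = {t_stat:.2f}, p = {p_value:.3f}")

The same layout extends naturally to the 6-, 9-, and 12-month administrations described above by adding further score columns as they are collected.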
  • 8. In addition, weekly observations will be conducted by staff and these observations will be provided in weekly case notes. Staff at the facility will specifically focus on observable and behavioral elements of the client’s progress as this is considered a better method of judging the client’s progress than are deductions made from the client’s self-proclaimed introspective work. The staff at the facility are already accustomed to this approach to case review and will simply restrict their observations (particularly those placed in writing) to that which is observed through overt client behavior without any inference being drawn beyond what is clearly observable and thus measurable. This should not be a problem since the state of Louisiana already encourages this type of referencing when compiling case notes and client progress evaluations. Further, the Substance Abuse Relapse Assessment (Psychological Assessment Resources) will be administered to subjects at the 3-, 6-, 9-, and 12-month periods. This instrument is a structured interview developed for use by substance abuse treatment professionals to help recovering individuals recognize signs of relapse (Psychological Assessment Resources). Likewise, staff will conduct follow-up interviews during this period of time to provide an overall Global Assessment of Functioning (GAF) rating for prior clients during the 3-, 6-, 9-, and 12-month periods of the study. This will provide an additional metric (ratio data) measure during the aftercare stages of treatment. Staff will also be asked to rate the degree of success (on a scale from 1 to 100) that clients have made in reaching their original goals that were self-contracted in their plan of change. Staff will rate client success in goal achievement during the 4th, 7th, and 13th months of the study. Upon completion of phase one, measures will also be taken at the close of the 4th, 7th, and 13th months through an informal survey of friends and family to determine if the subject is engaging in self-management strategies that were taught during phase one. These individuals will also be asked to rate the
  • 9. degree of success (on a scale from 1 to 100) that clients have made in reaching their original goals that were self-contracted in their plan of change. The information from these surveys will be triangulated with the information obtained from staff using the GAF checklist to provide a multidimensional view of the subject’s progress. Further, subjects themselves will be asked to rate the degree of success (on a scale from 1 to 100) that they have made in reaching their original goals that were self-contracted in their plan of change during phase one. Subjects will rate their success in goal achievement during the 4th, 7th, and 13th months of the study. In addition, agency cultural competence will be assessed using the Agency Cultural Competence Checklist (ACCC; Dana, Behn, & Gonwa, 1992). Specifically, the ACCC is an instrument that is designed to assess social service agency cultural competence with racial and ethnic minority groups. This checklist screens for both general cultural competence throughout the agency and culture-specific content within the assessment and intervention categories of that same agency. This instrument will be provided to staff members and to clients as a means of generating input on the adequacy of services in meeting minority needs and/or issues of faith or spirituality. C. Human Subjects Research—Procedures and Protocols. All procedures as outlined by the Louisiana Office for Addictive Disorders and the Louisiana Association of Substance Abuse Counselors and Training (LASACT) will be followed when administering therapeutic services to clients. All procedures required by the Human Subjects Review Board of the University of Louisiana at Monroe will be followed as well. In addition, data collection/records keepers will ensure that all data is coded and completely unidentifiable by the researchers or by others viewing the records.
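As one illustration of what such coding can look like in practice, the brief sketch below replaces client names with salted hash codes before an analysis file is ever created. The salt, names, and field choices are hypothetical and are not drawn from the facility’s actual procedures, which would be governed by the IRB-approved data management plan.

# Hypothetical sketch of de-identifying a client roster before analysis.
import hashlib

SALT = "locally-held-secret"  # retained by records staff, never shared with the analyst

def pseudonym(client_name: str) -> str:
    """Return a short, stable code that cannot be reversed without the salt."""
    digest = hashlib.sha256((SALT + client_name).encode("utf-8")).hexdigest()
    return "ID-" + digest[:8]

roster = ["Jane Doe", "John Roe"]  # stand-in names for illustration
coded = {name: pseudonym(name) for name in roster}
print(sorted(coded.values()))

Only the coded identifiers would be passed to the evaluator; the name-to-code key would remain with the records keepers, which is what keeps the analyst blind to client identities.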
  • 10. The primary investigator will analyze the entered data coded by the data collection/records keepers but will not be familiar with the physical hardcopy data sources, nor will he or she have identifiable contact with or knowledge of the clients of each facility who will be the subjects for this study. It should be noted that Dr. Hanser is a Licensed Addiction Counselor (LAC) and a Licensed Professional Counselor (LPC) in the State of Louisiana and therefore has a very good understanding of legal and ethical issues related to addictions treatment and therapeutic interventions while also having a strong grasp of research ethics pertaining to human subjects’ safety and confidentiality.Types of Data Collection The evaluation plan just noted is a bit detailed but was designed to obtain a blend of different measures and to increase accountability among treatment staff to ensure that they focus on the outcomes of their efforts. This blend of different measures can come from several sources but generally falls within four categories that include direct observation, the use of interviews, surveys and questionnaires, and official records. A description of each category was obtained from the BJA and is presented below: · 1.Direct Observation: Obtaining data by on-site observation has the advantage of providing an opportunity to learn in detail how the project works, the context in which it exists, and what its various consequences are. However, this type of data collection can be expensive and time consuming. Observations conducted by program staff, as opposed to an outside evaluator, may also suffer from subjectivity. · 2.Interviews: Interviews are an effective way of obtaining information about the perceptions of program staff and clients. An external evaluator will usually conduct interviews with program managers, staff members, and clients to obtain their perceptions of how well the program functions. Some of the disadvantages with conducting interviews are that they tend to be time consuming and costly. Further, interviews tend to produce subjective information. · 3.Surveys and Questionnaires: Surveys of clients can provide information on attitudes, beliefs, and self-reported behaviors. An important benefit of surveys is that they provide anonymity to respondents, which can reduce the likelihood of biased
  • 11. reporting and increase data validity. There are many limitations that are associated with surveys and questionnaires, including the reading level of the client and cultural bias. However, the use of standardized instruments provides a number of benefits because they have been tested to ensure at least a modicum of validity and reliability. The use of standardized surveys, questionnaires, and instruments enhances the baseline data that is initially collected and this then adds to the strength of the evaluation. More information on standardized instruments will be provided later in this chapter. · 4.Official Records: Official records and files are one of the most common sources of data for criminal justice evaluations. Arrest reports, court files, and prison records all contain much useful information for assessing program outcomes. Often these files are automated, making accessing these data easier and less expensive. Regardless of the types of data-gathering process that is ultimately used, evaluators tend to conduct two general types of agency evaluation: program outcome evaluation and process evaluation. Program outcome evaluation entails an ongoing collection of data to determine if a program is successfully meeting its goals and objectives. In many cases, these measures address project activities and services delivered. Some examples of performance measures might include the following: the number of clients served, changes in attitude, and rates of recidivism. These types of evaluations tend to measure the overall outcome of the projects. Effective treatment programs produce positive outcomes among clients. As would be expected, these programs generate client change while they participate in the program, and, in the most successful programs, client progress continues even after the client is discharged from a particular treatment regimen. Areas of evaluation that might be used to demonstrate outcome effectiveness might include any of the following: · 1. Cognitive ability (improvements in recall and/or overall
  • 12. testing scores or times) · 2. Emotional/affective functioning (such as anxiety and depression) · 3. Pro-social attitudes and/or values (such as improved empathy, honesty, etc.) · 4. Education and vocational training progress (traditional achievement tests) · 5. Behavior (evidenced by observable behaviors). Process evaluations focus on the implementation of the program and its day-to-day operations. Typically, process evaluations address specific processes or procedures that are routinely done within the agency. In many cases, process evaluation refers to assessment of the effects of the program on clients while they are in the program, making it possible to assess the institution’s intermediary goals. Process evaluation examines aspects of the program such as: · 1. The type of services provided · 2. The frequency of services provided · 3. Client attendance in individual or group counseling sessions · 4. The number of clients who are screened, admitted, reviewed, and discharged · 5. The percentage of clients who successfully complete treatment.Sex Offender Treatment Programs (SOTP): The Importance of Evaluation One type of treatment program and treatment population who warrants routine assessment and evaluation would be sex offender treatment programs and the clients of these programs. The evaluation of these programs is quite naturally important because sex offenders have generated a high level of public concern. Determining whether treatment programs do indeed “work” or whether they do not do so is paramount to determining whether this population should be given treatment in lieu of simple incarceration. Further, effective evaluation allows programs to improve their implementation. Due to public safety concerns associated with sex offenders, effective
  • 13. evaluation has become a very important element in designing treatment programs for these offenders. Sex offender treatment programs entail a variety of approaches that are used to prevent convicted sex offenders from committing future sex offenses. Students should refer to Chapter 12 on sex offender treatment programs when considering the evaluation of such programs. As one may recall, these approaches include different types of therapy, community notification, and standardized assessments (Bureau of Justice Assistance, 2008). Given the high level of denial among sex offenders, it is important that assessment and evaluation components are able to measure both latent as well as manifest aspects of sex offender progress in treatment. In other words, the skilled evaluator will keep in mind that this population is inherently very manipulative and will need to ensure that their evaluation model is able to detect deceit and manipulation from data provided by these offenders. Evaluations for sex offender treatment programs in prison are likely to have some differences from those in the community, particularly since public safety concerns are greater for those who are in the community. While some scales and processes will remain the same in both settings, evaluators in community-based settings will also need to consult with family and friends of the sex offender much more frequently than in a prison setting. The reason for this is simply that such individuals are likely to have more direct observations of the offender, their behavior, and their apparent commitment to the treatment regimen. Typically, there are three common therapeutic approaches to treating sex offenders. These approaches include (1) cognitive-behavioral approach, which focuses on changing thinking patterns related to sexual offending and changing deviant patterns of sexual behavior, (2) psychoeducational approach,
  • 14. which focuses on increasing offenders’ empathy for the victim while also teaching them to take responsibility for their sexual offenses, and (3) pharmacological approach, which uses medication to reduce sexual response. As one may recall in Chapter 12, the primary types of treatment are cognitive- behavioral in approach but many may use psychoeducational aspects as well. The pharmacological approach has not been discussed in this text and will generally not be an area of intervention that will require substantial input from the correctional counselor. It is for this reason that, when discussing evaluation, we focus our attention on efforts to evaluate cognitive-behavioral and psychoeducational interventions. Beyond the treatment staff, the supervision of sex offenders— and the evaluation of sex offender treatment programs—should include all parties who are involved with the case management of the sex offender, including law enforcement, corrections, victims (when appropriate), the court, and so on. All of these personnel can provide very useful information that may not be readily apparent to the evaluator. The key for the evaluator is to understand the one vantage point that each party provides from which he or she can view the sex offender treatment and/or supervision process. It is the composite picture, made up of the full range of individual observations, that should be used by the evaluator. Each party individually can provide valuable information in assessing the effectiveness and efficacy of the sex offender treatment program and supervision strategies (Bureau of Justice Assistance, 2008). Collectively, these parties provide a multifaceted view of the offender’s progress. Further, as was noted in Chapter 12, sex offenders are very manipulative, and even skilled therapists (and community supervision officers) may have difficulty discerning whether such an offender is making genuine and sincere progress. Because of this, it is important for the evaluator to get a
  • 15. comprehensive “snapshot” of the offender that is multidimensional in scope. The use of numerous observations and the comparison of those observations help to ferret out faulty data provided to the evaluator, whether the faulty data was provided deliberately (such as from the sex offender himself or herself) or accidentally/unknowingly from various personnel working with the offender. Naturally, the more comprehensive and the more accurate the evaluation, the more likely that agencies can refine their processes. Refined processes lead to more effective treatment and this then leads to increased public safety if the sex offender ceases recidivism due to effective treatment. Thus, the evaluator is a primary player in improving community safety through agency assistance in optimizing their service delivery. As with our earlier example of an evaluative design for a substance abuse treatment organization, the use of standardized assessment instruments with sex offenders can greatly improve the validity and reliability of the evaluation. Standardized tools are more effective than “home grown” surveys and questionnaires because, as we noted in the previous subsection, they have been tested to ensure that they are valid and reliable in providing treatment planning information for counselors and security criteria for correctional administrators and supervision staff. Thus, standardized assessment tools tend to increase the likelihood of treatment efficacy and also better identify sex offenders who are at a heightened risk of recidivism (Bureau of Justice Assistance, 2008). A more in-depth discussion on the use of standardized instruments in the evaluation process will be provided in part two of this chapter. For now, we simply wish to note their constructive use when conducting evaluations. Beyond the use of standardized data-gathering tools, evaluators tend to also address a number of specific areas of concern for publicly operated sex offender treatment programs. These areas of attention, as noted by the BJA (2008), include the following:
  • 16. · 1. Attrition in sex offender programs, with the hope of increasing the number of offenders who complete treatment · 2. Identification of offense characteristics that predict treatment failure · 3. Development of processes to better track high-risk sex offenders · 4. Continual improvement of the validity and reliability of screening and assessment instruments that are used · 5. Improving interventions for specific categories of sex offenders to improve upon one-size-fits-all treatment orientations. When conducting evaluations of sex offender treatment programs, there are a number of program outcome measures that may be utilized. The program outcome measures noted below are among those that are more common and provide administrators with a general idea of what their program processes produce upon completion of the program: · 1. Proportion of reconvictions for sexual offenses · 2. Change in treatment motivation · 3. Change in treatment engagement · 4. Increase in offender emotional health or adjustment · 5. Decrease in pro-offending attitudes · 6. Decrease in inappropriate sexual drive · 7. Decrease in aberrant sexual arousal and sexual fantasies. In addition, process measures provide an understanding of the day-to-day operations of the treatment program. These types of measures aid clinical supervisors and agency administrators in determining specific areas of treatment that work well while identifying those areas that need some type of modification or improvement. Some of the common process measures examined include the following: · 1. Number of face-to-face contacts between treatment provider and sex offender · 2. Number of meetings between the sex offender, therapist, and probation officer
  • 17. · 3. Number of visits by probation officers to the home of the sex offender · 4. Number of urine screenings for drugs/alcohol · 5. Number of medication-induced side effects · 6. Level of community supervision received. Lastly, the BJA (2008) has noted that there are numerous sex offender studies with different methodological problems such as small sample sizes, the lack of equivalence among control and experimental groups, and the use of low-quality assessment scales. Despite this, some sex offender studies have provided evidence that suggests that treatment programs used today are more effective than those used in the 1980s and 1990s. Of interest is the fact that evaluations that have compared different therapeutic approaches have consistently demonstrated that cognitive-behavioral treatment approaches hold particular promise for reducing sex offender recidivism (Bureau of Justice Assistance, 2008). As discussed in Chapter 12, cognitive-behavioral treatment with sex offenders is often provided in a group setting that focuses on cognitive distortions, denial of the offense while in treatment, deviant sexual thoughts and arousal, and a lack of empathy for victims. These programs lend themselves well to evaluation due to their clear processes of implementation and the ease with which those processes can be defined and quantified for research purposes. However, the ultimate litmus test of success is whether the sex offender recidivates, particularly through the commission of another sex offense. It is in this regard that cognitive-behavioral programs tend to demonstrate very good program outcome results because these programs tend to have more frequent and more significant reductions in recidivism than most other interventions that exist.
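To show how a recidivism-based outcome measure of this kind might be summarized, the sketch below compares reconviction proportions for a hypothetical group of treatment completers and a comparison group. The counts are invented, and a simple comparison of this sort assumes the very things the BJA cautions are often missing: adequate sample sizes, equivalent groups, and a common follow-up window.

# Hypothetical reconviction counts over a fixed follow-up period; the numbers
# are invented and imply nothing about any real program.
from scipy.stats import chi2_contingency

completers = {"reconvicted": 9, "not_reconvicted": 111}
comparison = {"reconvicted": 21, "not_reconvicted": 94}

table = [[completers["reconvicted"], completers["not_reconvicted"]],
         [comparison["reconvicted"], comparison["not_reconvicted"]]]

rate_completers = completers["reconvicted"] / sum(completers.values())
rate_comparison = comparison["reconvicted"] / sum(comparison.values())

# A 2 x 2 chi-square test asks whether the two proportions plausibly differ.
chi2, p_value, dof, expected = chi2_contingency(table)

print(f"Completer reconviction rate:  {rate_completers:.1%}")
print(f"Comparison reconviction rate: {rate_comparison:.1%}")
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")

In an actual evaluation, methods that account for differing lengths of time at risk in the community would usually be preferred over a single fixed-window comparison.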
  • 18. SECTION SUMMARY Evaluative research is very important to treatment agencies since it is this process (and this process alone) that allows correctional counseling programs to operate as evidence-based programs. The use of internal evaluation is what ensures that counseling processes are in a state of continued refinement and improvement. This means that the evaluator, in many respects, must act in an independent fashion when conducting data collection and the research that will evaluate the agency. Likewise, the ideal evaluator is one who not only has sufficient credentials in research and statistical analysis but also has experience and expertise with the specific type of treatment program that is being evaluated. This will ensure that the evaluator will have a good contextual understanding of the dynamics within the agency and/or the challenges that tend to be encountered in a given area of treatment service. In addition, the evaluator should strive to have a cordial and warm rapport with agency staff, but it is their task to operate in a neutral and detached manner when determining quantitative outcomes for the agency. When designing the evaluation plan, five key elements should be addressed. These five elements are as follows: (1) determining the target audience for the evaluation and the dissemination of its results; (2) identifying the evaluation questions that should be asked; (3) determining how the evaluation design will be developed; (4) deciding the type of data to be collected, how that data will be collected, and by whom; and (5) articulating the final products of the report that will be produced. This last element is what will be most important to the treatment program or facility since this will be the document that will determine whether the agency is viewed as a success or a failure (or neither). Lastly, evaluators must provide measures for both processes and outcomes within the agency. Process measures are related to the day-to-day operations within the agency, such as techniques used in group therapy, number of sessions provided, or number of weeks that the client is in treatment. Outcome measures
  • 19. examine the final product once the program has been completed and might include the behavior of the client, emotional stability of the client, or a client’s educational achievement while in the treatment program. In addition, examples of evaluation projects for a drug treatment program and for a sex offender treatment program were discussed. These examples demonstrated several key aspects of evaluation, such as the use of standardized instruments (discussed in more detail in part two of this chapter), the use of outcome and process measures in evaluation, and the need for treatment and evaluative personnel to work in a collaborative fashion. Lastly, drug treatment is one of the most often encountered forms of treatment provided within the correctional setting while sex offenders are among the most manipulative offenders whom correctional counselors will encounter. It is for these reasons that examples were provided for the evaluation of programs addressing these types of clinical challenges.LEARNING CHECK 1. Cognitive-behavioral approaches have a great deal of empirical research that supports their effectiveness with sex offenders. · a.True · b.False 2. Outcome measures examine the day-to-day operations of treatment programs. · a.True · b.False 3. Direct observation, interviews, surveys and questionnaires, and official records are the four primary means by which data are collected for evaluation projects.
  • 20. · a.True · b.False 4. The Addiction Severity Index (ASI) is commonly used in treatment facilities all over the United States. · a.True · b.False 5. Change in treatment motivation has been identified as a program outcome useful for many sex offender treatment programs. · a.True · b.FalsePART TWO: CONSIDERATIONS IN FORMING THE EVALUATIVE DESIGN The specific approach that a researcher may use to evaluate an agency may depend on a number of different factors. The needs of the agency, required reporting to grant funding agencies, ethical limitations, financial limitations with the research, process and outcome considerations, and feasibility of completing the research may all prove to be important factors in formulating the ultimate evaluative design. These initial considerations are very important and they will be instrumental in determining the appropriate approach in evaluation. Further, for many treatment programs (particularly those that are grant funded), the results of research projects can be very important in determining if programs continue to exist. Consider, as an example, that research related to the effectiveness of juvenile boot camp programs has tended to show that juvenile boot camp programs do not provide long-lasting changes in behavior of delinquent youth. These youth, once released, still tend to
  • 21. return to their criminal behaviors once they are returned to their old environments. When such findings emerge, questions related to the accuracy of the results may also be generated. This is also just as true when we find that programs work exceptionally well. In such cases, we must be able to clearly demonstrate that our findings have been produced by the phenomenon that we believe have served as the causal factors. Consider again our example of the juvenile boot camp observation. How do we know if it is the structure of the juvenile boot camp intervention that is flawed? Could it be that juvenile boot camps are well designed and successful but some other spurious factors were causing recidivism among these youth? How do we determine and distinguish between these different potential explanations for juvenile recidivism after finishing a boot camp program? Answers to these questions can only be provided if we ensure that two primary constructs exist within our research. These constructs are known as validity and reliability.Validity in Evaluative Research Validity describes whether an instrument actually measures the construct or factor that we have intended to measure. For many students, it may seem strange that one could not know if they are measuring what they intend to measure; however, the mental health and counseling fields often are tasked with measuring concepts that cannot be readily and physically seen. For instance, the measurement of attitudes may be quite difficult, particularly if a client is deliberately being deceptive. In addition, some clinical disorders may consist of symptoms that also exist with other disorders, thereby making it difficult to distinguish the disorder that is actually being measured. Further, some disorders may frequently coexist with other types of disorders, being so commonly connected that medications prescribed for one may be similar or identical to those
  • 22. prescribed for the other. An example of this would be the disorders of anxiety and depression. In many cases, psychiatrists may prescribe identical medications for both disorders. Further, it is frequent for persons with one of these disorders to also present with the other. Distinguishing whether a client engages in a behavior due to anxiety responses or depressive/affective responses may be important from a clinical perspective. Therefore, whatever measure the treatment program uses, it is important that it correctly and accurately discerns between these two disorders if the desire is to optimize treatment outcomes. Though these two disorders may coexist, they are actually quite different from one another and individualized treatment plans must correctly distinguish between such clinical nuances if effective treatment outcomes are to be expected. Thus, the process used to distinguish between disorders must be valid; it must measure the disorder that it is intended to measure without convoluted outcomes, thereby correctly providing for clinical diagnoses. This type of clinical example can become even more important and even more complicated when other constructs, such as low self-esteem, are also added into the therapeutic equation. Indeed, many persons with low self-esteem suffer from either minor depression, anxiety, or both. The question then becomes “which comes first, the low self-esteem followed by depression and/or anxiety, or the depression and/or anxiety with corresponding low self-esteem?” In order to correctly answer this question, one must be able to correctly distinguish between both clinical disorders as well as the general construct of low self-esteem. Only a valid measure will be able to do this. What is more, this measure must be very sensitive to underlying differences between disorders and constructs that have many latent interconnections; this further complicates the ability to achieve valid measurements but also demonstrates why this is all the more important.
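One simple way evaluators sometimes probe this kind of question is to compare how a score correlates with established measures of each construct, often described as convergent versus discriminant validity. The sketch below is illustrative only; the scores are invented and no particular instrument is implied.

# Hypothetical totals for ten clients: a new screening score alongside two
# established reference measures. All values are invented for illustration.
import numpy as np

new_screen = np.array([12, 18, 9, 22, 15, 11, 20, 8, 17, 14])
anxiety_ref = np.array([13, 19, 10, 24, 14, 12, 21, 9, 18, 15])
depression_ref = np.array([7, 12, 15, 10, 9, 16, 11, 14, 8, 13])

# Convergent validity: the new screen should track the construct it claims to
# measure. Discriminant validity: it should track a different construct less.
r_anxiety = np.corrcoef(new_screen, anxiety_ref)[0, 1]
r_depression = np.corrcoef(new_screen, depression_ref)[0, 1]

print(f"Correlation with anxiety reference:    {r_anxiety:.2f}")
print(f"Correlation with depression reference: {r_depression:.2f}")

A pattern in which the first correlation is strong and the second is weak is one piece of evidence, not proof, that the screen is capturing anxiety rather than a neighboring construct.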
  • 23. In theory, if you address the primary issue first, the other issues will tend to also subside on an exponential basis. Though there are many more examples of clinical and nonclinical situations where invalid measures may be mistakenly used by researchers, we provide this example to demonstrate the complexity associated with distinguishing valid results in correctional treatment. We also provide this example to demonstrate why it is so important to correctly discern among various disorders and behavioral constructs. This is even more critical to public safety when behavioral symptoms include violent and/or medically risky behaviors. Therefore, it is important that evaluators of mental health programs ensure that their measures are valid and it is important for clinicians being evaluated to remain receptive to the requests of evaluators to provide exacting and detailed specificity as to observed symptoms, clinical impressions, and other aspects that the counselor may use to generate his or her own clinical judgments in treatment.Reliability in Evaluative Research Reliability is a concept that describes the accuracy of a measure, which in turn describes the accuracy of a study. As an example, consider again an evaluation where measurements of client anxiety are taken. A reliable measure is one that accurately reflects the level of anxiety and that would consistently do so over time and across multiple measurement points if interventions were not provided. The measure is reliable when it reflects the true level of anxiety that the client experiences accurately and on a consistent basis. The ability to gauge the level or intensity of a mental health symptom (such as anxiety) correctly and consistently over multiple measurement points makes a process reliable. It is important to clarify that the consistent reporting of results, in and of itself, is not the only consideration in determining reliability. Rather, it is also the ability to provide a measure that correctly determines the modulation of that symptom.
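Before turning to the example that follows, here is a minimal numerical sketch of one common way this idea is checked in practice, a test-retest comparison. The scores are invented, and the sketch is meant only to make the notion of consistency concrete, not to stand in for a formal reliability study.

# Hypothetical anxiety-scale totals for eight clients, administered twice,
# two weeks apart, with no intervention in between. Scores are invented.
import numpy as np

time_1 = np.array([24, 31, 18, 27, 35, 22, 29, 26])
time_2 = np.array([25, 30, 19, 28, 33, 23, 28, 27])

# Scores should rise and fall together across clients (high correlation) and
# should not drift systematically between administrations (small mean shift).
retest_r = np.corrcoef(time_1, time_2)[0, 1]
mean_shift = (time_2 - time_1).mean()

print(f"Test-retest correlation: {retest_r:.2f}")
print(f"Mean shift between administrations: {mean_shift:+.2f} points")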
  • 24. For example, a measure may consistently demonstrate that a client has low levels of anxiety when, in fact, they suffer from high levels of anxiety. Since the person does, in fact, suffer from anxiety this measure is valid; it is expected that anxiety is being measured and the instrument does indeed measure symptoms of anxiety. However, the instrument is not reliable because it consistently provides a measure that underrates the level of anxiety that the client consistently experiences. Consistently inaccurate measures cannot be considered reliable. Validity and reliability are absolutely critical to conducting evaluative research; without them the research is essentially useless. Research in the field of correctional counseling is particularly important due to the implications that may emerge related to public safety and the continuation of programs. Therefore, the role of evaluators in treatment programs is one that is very important, both within the lone treatment facility and when making determinations for the funding of programs throughout a state or the nation. But the question then emerges, how do we ensure that the outcomes that are produced are, in fact, valid and reliable? One effective means of obtaining valid and reliable data would be to use standardized instruments that have been specifically designed to ensure that client information meets acceptable criteria with both constructs.The Basics of Standardized Treatment Planning and Risk Assessment Instruments As has been noted, the use of standardized instruments can add strength to any evaluation design. These instruments have been tested through a variety of processes and statistical analyses to ensure their validity and reliability, when properly used. It is the last part of the prior sentence—when properly used—that is important to note for correctional counselors. Many counselors who have the traditional graduate level education in counseling (this includes correctional counselors) will tend to have only one course that deals specifically with testing and assessment. Further, these programs often only require one class in research
  • 25. methods and, as is customary among counseling programs throughout the United States, there will be no specific course in statistics. This is because many counseling programs are designed to train therapists, not researchers. On the other hand, the field of psychology tends to consistently require at least one research methods course, a separate statistics course, and will also have at least one (or more) courses in testing and assessment. Even with this increased emphasis on statistics and testing processes, persons with only a master’s degree in psychology are not able to practice without obtaining some sort of supervision from a Ph.D. level psychologist. This is despite the fact that counselors with master’s degree in counseling as well as advanced internships and practicum are licensed to conduct therapeutic services. These counselors are typically not qualified to conduct psychological testing on their own without additional training and, even then, there are limits to the types of tests that they may legally administer. For laypersons and for paraprofessionals, the training in testing is even less than what is obtained by licensed counselors. In some treatment settings, paraprofessionals may conduct the majority of the day-to-day work, and they may even be required to read and utilize the results from standardized tests when performing their job. Naturally, these persons are not able to administer, score, or interpret such tests. They typically will simply use the results from an appraisal or evaluative specialist as a tool in treatment planning. The reason for describing the credentials involved with the use of standardized tests is to demonstrate that few mental health professionals are able to administer, score, and interpret these tests without a doctoral level education. Further, many correctional treatment settings do not have full-time clinical psychologists and/or counselors who are qualified to conduct
  • 26. test administration. Thus, correctional counselors tend to not be well grounded in an understanding of the basic characteristics of a sound and empirically designed standardized instrument, particularly one with psychometric properties. This is an important point to note and this is precisely why we have included a brief overview of those characteristics of a valid and reliable testing mechanism. Before proceeding further, students should understand that standardized tests tend to be used for two key purposes in correctional counseling: treatment planning and security classification. As has been noted in earlier chapters (specifically Chapters 1 through 3), correctional counselors must not only attend to therapeutic concerns of offenders who are clients, but they must also consider public safety when determining the prognosis of their clients. In other words, they must be concerned as to whether their clients will cause additional harm in society once they are released from a correctional facility and/or from community supervision. Because of this, correctional counselors will sometimes deal with standardized assessment tools that serve both a treatment planning and a security classification purpose. Thus, it is useful for correctional counselors (and especially treatment evaluators) to understand some of the common principles associated with standardized treatment planning and classification instruments. A failure to understand these basic statistical and/or methodological considerations can lead to the misuse of these instruments among clinicians. James Austin (2006) provides six basic suggestions for correctional treatment professionals who may wish to know whether their instruments are effective. Many of Austin’s comments have to do with the methodology that was used to construct the testing instrument, which then relates to the validity and reliability of that given instrument. Thus, knowing these basic concepts can help correctional counselors to ensure that instruments that they use
  • 27. and/or integrate into their treatment planning are appropriate and this also can ensure that correctional counselors use those instruments appropriately in their day-to-day operation. According to Austin (2006), the following points should be considered when utilizing standardized form for treatment planning, classification, and/or evaluative purposes: · 1.Selected Standardized Instruments Must Be Tested on Your Correctional Population and Separately Normed for Males and Females. Austin (2006) notes that when assessment tools are tested on the offender populations in one area of the nation, they may not be as relevant to offenders in another area. For example, consider the state of California as compared to the state of Nebraska. It is likely that the offender populations in each state will differ, one from the other. Because of this, treatment programs and treatment program evaluators should use instruments that are essentially normed on—or tailored to— the characteristics of offender populations that are similar to those that they work with. Austin (2006) points out that “in research terms this issue has to do with the ‘external validity’ of the instrument and the ability to generalize the findings of a single study of the instrument to other jurisdictions” (p. 1). Therefore, if an instrument is normed on an offender population that is substantially different from the one that the evaluator is assessing, it is likely that the assessment and the evaluation outcomes will not be as accurate (Hanser, 2009). Further, male and female offenders differ in both their treatment needs and security concerns. Characteristics associated with criminal behavior and prognoses for treatment tend to differ between male and female offenders (Hanser, 2009). Because of this, standardized instruments should be different for male and female offenders or instruments should have built-in mechanisms that are designed to differentiate between both populations; but in many cases separate instruments are not used and typically used instruments do not sufficiently differentiate between the needs of male and female offenders. To be reliable, assessment tools must give appropriate weight to
  • 28. gender differences among offenders, both in treatment planning and in the evaluative process (Hanser, 2009). Austin (2006) comments further that “recidivism and career criminal studies consistently show that females are less involved in criminal behavior, are less likely to commit violent crimes and are less likely to recidivate after being placed on probation or parole” (p. 1). · 2.Interrater Reliability Tests Must Be Conducted with Instruments that Are Selected. Austin (2006) states that both an interrater reliability test and a validity test must be completed by independent researchers prior to using a test for treatment planning, assessment, or evaluation. Further, these reliability and validity safeguards should be assured by researchers who accrue no monetary or political benefit when determining whether a standardized test is reliable and/or valid (Austin, 2006; Hanser, 2009). In simple terms, interrater reliability has to do with the consistency of the results that are obtained from an instrument. An instrument with strong interrater reliability will consistently yield the same outcomes regardless of the person who administers it (Hanser, 2009). This is very important for evaluative research and echoes the points made earlier in our previous subsection regarding reliability in the evaluation design. · 3.A Validity Test Must Be Conducted. As with evaluative designs, the instruments used in those designs must also be valid. As has been explained earlier, validity ensures that the instrument is actually measuring what the evaluator and/or correctional counselor believe is being measured. As we noted in our example with valid measures of anxiety (see our earlier subsection), instruments can provide measures that correlate with a given issue but the cause of that correlation may be due to some unknown factor (Hanser, 2009). · 4.The Instruments Must Allow for Dynamic and Static Risk Factors. Students should recall from Chapter 3 the distinctions between dynamic and static risk factors. Dynamic risk factors include characteristics such as age, marital status, and custody
  • 29. level (Hanser, 2006, 2009). The key commonality among dynamic risk factors is that they can and do change over time. Static risk factors include characteristics such as age at first arrest, crime seriousness, and prior convictions. Once established, these characteristics do not fluctuate over time (Hanser, 2006, 2009). Both of these factors are important for treatment planning while the offender is on supervision, risk prediction during release from incarceration, and in evaluating offender outcomes in treatment programs. For example, one author of this text who is also an independent evaluator for a drug treatment center for female offenders sought to determine if age had a significant correlation with various aspects of treatment success. In this case, a dynamic risk factor was utilized to analyze offender outcomes. In addition, this same evaluator sought to determine if the number of prior convictions was significantly correlated with treatment success; this is an example where a static risk factor was used to evaluate client treatment outcomes. · 5.Instruments Must Be Compatible with the Skill Level of Treatment Staff. As was discussed earlier, different treatment staff will tend to have different levels of credentialing (i.e., laypersons, paraprofessionals, counselors and psychologists with master’s degrees, counselors with doctorate degrees and specific training in psychometrics, and clinical psychologists with doctorate degrees). The level of credential can be important since this determines whether a person may be qualified to administer a specific test. Indeed, the accuracy of an assessment instrument can be just as dependent upon the skill of the person administering the tool as is its construction. It is not enough for a clinician and/or evaluator to use a well- developed instrument, but they must also have sufficient training in statistical analysis, research design, and testing processes and they must have adequate training before they can properly administer many standardized tests. Naturally, some tests are more complicated than others and it is because of this that different tests may require different levels of credentialed
  • 30. qualifications. In addition, evaluators must have experience administering those instruments or instruments similar to those that they use. Training or education alone is not sufficient; there is simply no replacement for the skill and familiarity that is acquired through the process of repetitive administration of a given instrument. The importance of these qualifications cannot be overstated. Further, many evaluative efforts may not always include standardized instruments as they can be costly to purchase, they may entail high costs in obtaining qualified personnel, and the process can be complicated and demanding. However, these costs and drawbacks do not offset the value that is added to an evaluative design for those agencies who truly wish to improve their service delivery and the treatment outcomes of clients in their programs. The importance of professional qualifications is often evidenced by the fact that companies such as Western Psychological Services (WPS) and Psychological Assessment Resources (PAR), two well-known companies that copyright and sell standardized instruments, require persons ordering such instruments to provide proof of their credentials, training, and/or experience with similar instruments. · 6.The Assessment Instrument Must Have Face Validity. Lastly, the instrument and the process of assessment must be understood and recognized as credible by treatment staff and clients of the program that is being evaluated. Indeed, instruments that are only understood by academics will not be widely accepted by most treatment staff and such instruments can often confuse offenders who, in many cases, do not have well-developed reading skills. Further, if the instrument is perceived as being too “bookish” in nature and not applicable to the realities of the “street,” so to speak, clients are likely to view the instrument as artificial and sterile, not really being able to probe the true reality of what an offender may (or may not) experience (Hanser, 2009). With this in mind, students should understand that a lack of “face validity” means that the instrument is not recognized as valid on its face, or at initial
  • 31. glance, by those who judge its ability to assess or appraise a set of characteristics (Hanser, 2009).Ethics in Evaluation Ethics refers to what is right and wrong in relation to human conduct. This is a vital component of any research endeavor and should be taken seriously. At no time should human subjects be exposed to undue harm while attempting to carry out a research project. One of the best ways to ensure ethical standards is to be open and honest with participants. Each component of the research design should be clearly explained to all participants. Participants should also be given the opportunity to freely choose whether to consent or refuse to participate in the study. In addition, great care should be taken to ensure that the identity of each participant remains anonymous. Three ethical principles were established by the Department of Health, Education, and Welfare in 1979, aimed at protecting human subjects and eliminating human rights violations: · 1. Respect for persons—treating persons as autonomous agents and protecting those with diminished autonomy; · 2. Beneficence—minimizing possible harms and maximizing benefits; · 3. Justice—distributing benefits and risks of research fairly (Schutt, 2006, p. 81). All research proposals should be reviewed by the appropriate Institutional Review Board (IRB). The primary purpose of the IRB is to ensure that ethical standards clearly resonate in all facets of the proposal and that risk to human subjects is minimal. IRB approval is especially critical when conducting human subjects research. In fact, some research projects may require IRB approval from multiple agencies. In addition, we strongly recommend that students visit the APA’s website on “Ethical Principles of Psychologists and Code of Conduct.” In particular, evaluators should take heed of Section 8 on “Research and Publication,” which notes that informed consent must be obtained from participants (particularly agency clients in treatment). The following is a list of points paraphrased from requirements noted
The following is a list of points, paraphrased from requirements noted by the American Psychological Association (2009), that should be communicated to clients in treatment who are part of the evaluation process; a minimal sketch of how an evaluator might document these points appears after the list:
· 1. The purpose of the evaluation, the procedures involved, and the duration of the evaluative process
· 2. The voluntary nature of participation in the research and their right to cease participation at any time that they desire
· 3. Any potential consequences of declining or withdrawing
· 4. Possible risks, discomfort, or adverse effects involved (if any) with participation
· 5. Potential benefits to the client and/or the agency that the evaluative research might produce
· 6. The general limits of confidentiality (students should refer back to Chapter 2 for additional information on confidentiality)
· 7. Any incentives provided to get clients to participate
· 8. Information on their rights and notice of a contact person to whom questions can be directed regarding the evaluation process.
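As a minimal sketch, the hypothetical ConsentRecord below shows one way an evaluator might document that each of these points was covered before a client is enrolled in the evaluation. The field names and the ready_to_enroll check are illustrative assumptions, not requirements from the APA code or from any IRB.

```python
# Hypothetical consent-documentation record; field names are illustrative
# and mirror the eight points listed above.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ConsentRecord:
    purpose_and_procedures_explained: bool = False   # point 1
    voluntary_nature_explained: bool = False         # point 2
    withdrawal_consequences_explained: bool = False  # point 3
    risks_and_discomforts_explained: bool = False    # point 4
    potential_benefits_explained: bool = False       # point 5
    confidentiality_limits_explained: bool = False   # point 6
    incentives_disclosed: bool = False               # point 7
    contact_person_provided: bool = False            # point 8
    consent_signed_on: Optional[date] = None

    def ready_to_enroll(self) -> bool:
        """True only when every point is documented and consent is signed."""
        points = (
            self.purpose_and_procedures_explained,
            self.voluntary_nature_explained,
            self.withdrawal_consequences_explained,
            self.risks_and_discomforts_explained,
            self.potential_benefits_explained,
            self.confidentiality_limits_explained,
            self.incentives_disclosed,
            self.contact_person_provided,
        )
        return all(points) and self.consent_signed_on is not None

# Example: a record is incomplete until all points are covered and signed.
record = ConsentRecord(purpose_and_procedures_explained=True)
print(record.ready_to_enroll())  # False
```

Keeping such a record for each participant also makes it easier to demonstrate to an IRB that the consent process was actually followed.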
Reviewing Evaluation Findings
Once the evaluator has designed and implemented the evaluation process within a treatment agency, it is not enough for that person to simply "crunch numbers" and provide statistical reports. Rather, they must communicate the outcome of the evaluation and provide feedback and/or suggestions to treatment personnel so that they can refine their techniques and approach. Creation of this feedback loop is critical; without it, the evaluation simply sits stale and useless within the treatment agency. Because evaluators must interpret and explain their findings, it is important for the evaluator to have worked as a treatment provider, if at all possible. This allows the evaluator to understand the nuances and unspoken complications in providing therapeutic services. Without such insight, evaluators are limited to a one-dimensional understanding of the treatment process, being restricted to the limitations of their data when interpreting results. Beyond the process of collecting data and conducting analyses, evaluators are often trusted by treatment programs to provide interpretations and to produce conclusions resulting from their analysis. Along with this, evaluators may provide recommendations that are based on the findings. The evaluator, in providing such recommendations, will usually discuss the outcome with agency supervisors. In such cases, correctional counselors would be well served to heed the information provided by evaluators since their analysis is likely to be free of the subjective impressions that counselors tend to form of clients and their clinical situation. This is not to say that, in all cases, the evaluator's interpretation of treatment effectiveness is more accurate than that of the therapists who work in a given treatment facility. Rather, it is to say that the evaluator's observations can serve as a good counterbalance to the subjective observations of program staff. This is perhaps one of the best means by which clinicians can optimize their interventions and, in the process, establish their treatment program as being evidence-based in nature.
Incorporating the Evaluation Research Findings into Therapy
The primary goal of evaluation research is to enhance the services provided to offenders. We need to know what is working and what types of interventions are able to enact meaningful change and help keep offenders out of future contact with the criminal justice system. This is a critical component for creating and maintaining the credibility of the counseling profession in working with offenders. Criminal justice is a discipline that frequently sees the theoretical pendulum swing from tougher incarceration policies to those more focused on rehabilitation and counseling. In order for counseling to remain viable, we need to strive toward implementing practices that are theoretically sound and able to adapt to the peculiarities of individuals within the offender population and their particular needs. Relapse and recidivism are concepts that generally represent different disciplines but are inextricably connected. In counseling, we use relapse to signify an individual's reengagement in problem behavior.
In criminal justice, we use recidivism to describe the process of committing a criminal act that brings an individual back into the justice system. From the perspective of correctional counseling, these concepts are best viewed as part of a singular process, meaning that, generally, offenders who recidivate are going to be offenders who have also relapsed into some type of problem behavior. Indeed, further proof of the interconnected nature of these terms is seen in recent grant Requests for Proposals (RFPs) released by the Substance Abuse and Mental Health Services Administration (SAMHSA), where specific grant projects call for programs that simultaneously address substance abuse relapse and criminal recidivism. Correctional counselors will eventually select a style of counseling that most suits their own personality and expertise. The selected style of counseling should be one that allows each counselor to operate from his or her authentic self. In addition to each counselor's individual knowledge of his or her particular therapeutic modality, it is very important that counselors listen to offenders as they share their own reasons for relapse and recidivism. The offender's self-reported reasons for engaging in the behavior that led to his or her arrest are rich information for the counselor to explore. It may be that there are intricacies within a story that are unique to an offender and require specialized interventions that aim to reframe cognitions and alter behavior. Self-reported data also provide a good source of validating information that may have been captured in standardized instruments used by many facilities at intake. Common standardized assessment instruments measure an offender's levels of depression, anxiety, and trauma. These initial assessment instruments and self-report data usually provide a baseline from which subsequent counseling services can be gauged in regard to whether an offender's psychological and emotional outlook is improving (Figure 14.1).
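To make the idea of a measurable baseline concrete, the sketch below uses invented client data, scale names, and scores (none of which come from the text) to show how intake and follow-up results on such instruments might be compared, alongside simple relapse and recidivism flags of the kind discussed above.

```python
# Illustrative only: invented intake/follow-up scores and outcome flags.
from statistics import mean

SCALES = ("depression", "anxiety", "trauma")

clients = [
    {"id": 101, "depression": (24, 14), "anxiety": (30, 18), "trauma": (41, 35),
     "relapsed": False, "rearrested": False},
    {"id": 102, "depression": (19, 21), "anxiety": (22, 20), "trauma": (28, 30),
     "relapsed": True, "rearrested": True},
]

def change_scores(client: dict) -> dict:
    """Positive values mean symptom reduction from intake to follow-up."""
    return {s: client[s][0] - client[s][1] for s in SCALES}

for c in clients:
    print(c["id"], change_scores(c))

# Aggregate indicators an evaluator might report back to the agency.
relapse_rate = mean(1 if c["relapsed"] else 0 for c in clients)
recidivism_rate = mean(1 if c["rearrested"] else 0 for c in clients)
avg_depression_drop = mean(change_scores(c)["depression"] for c in clients)
print(f"Relapse rate: {relapse_rate:.0%}   Recidivism rate: {recidivism_rate:.0%}")
print(f"Average reduction in depression score: {avg_depression_drop:.1f}")
```

A real evaluation would, of course, involve far more cases, appropriate statistical tests, and attention to the psychometric properties of the instruments, but the basic logic of comparing follow-up results against a documented baseline is the same.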
Creating a Feedback Loop in Therapy
The process of refining one's method of counseling should be constant. Much of the refinement should be based on both quantitative and qualitative information gained from the process of interacting with offenders and delivering treatment. When the data collection process adheres to acceptable standards of scientific investigation, the data produced should be relied upon heavily to "drive" future counseling sessions. In essence, the entire process of counseling offenders is best viewed as a circular phenomenon that mirrors the process of scientific inquiry. We begin with a distressed offender and attempt to understand the particulars of the distress. We then proceed to the implementation of counseling techniques in an effort to reduce the distress. During this process we are constantly evaluating whether the treatment is effective. If the offender shows signs of improvement based on an intervention, we will likely continue with subsequent applications of that intervention. If the offender does not seem to be responding well, or improving, it may be that we need to adjust our methods of intervention and then reassess after a reasonable period of time. This process continues until the offender is deemed suitable to proceed without further treatment.
FIGURE 14.1 The Means by which Data Collection and Evaluation Create Feedback Loops that Impact Agency Interventions.
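The cycle described above (and depicted in Figure 14.1) can be summarized in a short, toy simulation. Everything in this sketch is hypothetical: the numeric "distress" score, the discharge threshold, and the assumed effects of intervention all stand in for clinical judgment and real assessment instruments.

```python
# Toy simulation of the assess -> intervene -> reassess -> adjust cycle.
# Scores, thresholds, and "intervention effects" are invented placeholders.
import random

DISCHARGE_THRESHOLD = 10.0  # hypothetical "low distress" cutoff

def reassess(distress: float, approach_effective: bool) -> float:
    """Simulate readministering assessment instruments after a period of treatment."""
    change = random.uniform(3, 6) if approach_effective else random.uniform(-1, 1)
    return max(0.0, distress - change)

def feedback_loop(baseline_distress: float, max_cycles: int = 10) -> float:
    distress = baseline_distress
    approach_effective = False  # the initial approach happens not to suit this client
    for cycle in range(1, max_cycles + 1):
        new_distress = reassess(distress, approach_effective)
        print(f"Cycle {cycle}: distress {distress:.1f} -> {new_distress:.1f}")
        if new_distress <= DISCHARGE_THRESHOLD:
            print("Discharge criteria met; treatment concludes.")
            break
        if new_distress >= distress:
            # No improvement observed: adjust the intervention, then reassess later.
            print("No improvement observed; adjusting the intervention.")
            approach_effective = True  # assume the adjusted approach fits the client better
        distress = new_distress
    return distress

random.seed(1)
feedback_loop(baseline_distress=30.0)
```

The point is not the arithmetic but the structure: deliver the intervention, measure, compare against the baseline, and then either continue, adjust, or conclude treatment.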
Improving Therapy: A Final Note
The best counselors are personally congruent; they are authentic and provide a realness in which discussions and disclosures are meaningful. Counselors who are not authentic will likely hide behind the delivery of scripted techniques and sanitized disclosures that are incapable of prompting the genuine exchange needed to heal old wounds. Counselors must be aware of their own psychological and emotional needs. Our own ability to attend to these needs in professional settings models our ability and willingness to make changes and can be very beneficial to offenders. Change is frightening for all human beings. But imagine the level of trepidation for those offenders who have never had the opportunity to observe another person take the risk of disclosing personal information in hopes of a better life. Counselors have the opportunity to be meaningful change agents for many of the offenders they encounter. Whether the change will be meaningful and lasting, however, will in large part hinge on the counselor's own psychological and emotional depth. This is precisely why counselors should take every opportunity to engage in training aimed at enhancing their own self-understanding. A guiding question that should always be on the mind of counselors is: "Would I be willing to do what I am asking the offender to do?" Indeed, the process of obtaining continued education is one that is mandated by nearly all ethical governing bodies within the counseling field. This is because the field of counseling (including correctional counseling) is always changing and improving. Therefore, when correctional counselors pursue further education throughout their careers, they are the beneficiaries of evaluative research that distinguishes the approaches that "work" from those that do not. This is a continual improvement process in which one utilizes an approach, tests that approach, obtains results from that test, and, based on the findings, modifies future intervention approaches. Simply put, counselors must make a point to stay abreast of such research and to grow along with their discipline. Failing to do so produces a serious shortcoming in their competency to provide services and amounts to professional negligence. Further still, such a failure is also a failure to safeguard our clients' welfare. Thus, research is important because it guides how our field, and our own individual careers, should develop. In essence, we are all works in progress, and the best treatment professionals are those who know that they never stop growing, both personally and professionally. To stop growing would essentially mean that we have decided to stop caring. Nothing could be more contradictory to the spirit, point, and purpose of the counseling profession.
SECTION SUMMARY
When conducting evaluative research, there are a number of issues to consider prior to starting the actual evaluation.
First and foremost, the evaluator must consider issues related to the validity and reliability of the research that is conducted. Without addressing these two important concepts, the evaluation of the treatment program is likely to have no useful outcome. One way to facilitate valid and reliable data collection is to use standardized instruments. Gaining data from clients and staff through the use of standardized instruments can ensure that at least a minimal degree of validity and reliability is inherent in the data that are obtained. However, the simple use of these instruments does not, in and of itself, ensure that the evaluation will automatically be successful. The evaluator and relevant agency staff must be trained on the use of these instruments; if the instruments are not used properly, the evaluation will consist of essentially useless information. Further, ethics in research should be given priority, particularly in regard to the boundaries of confidentiality, ensuring that clients have given informed consent prior to participating in the evaluation process. Once the evaluator has considered the validity and reliability of the evaluation design and has ensured that ethical safeguards are in place, they should proceed with the evaluative process. When completing the evaluation, they should provide feedback to treatment staff (particularly supervisory clinical staff) and disseminate the results of their findings. Further, evaluators should work with treatment staff and administrators to integrate findings within the agency's day-to-day operations. It is in this manner that feedback loops are built so that the evaluative process can further aid and support the continual refinement of treatment interventions.
LEARNING CHECK
1. Relapse and recidivism are two concepts that should not be considered related.
· a. True
· b. False
2. Validity is the ability to get consistent measurements.
· a. True
· b. False
3. Reliability describes the accuracy of a measure.
· a. True
· b. False
4. The primary goal of evaluation research is to refine treatment program efforts aimed at rehabilitating offenders.
· a. True
· b. False
5. It is not necessary for correctional counselors to understand evaluation research.
· a. True
· b. False
CONCLUSION
Research and assessment of correctional counseling programs is vital. It is through this process that we are able to identify program strengths and weaknesses that serve to inform the literature. It is also through the evaluative process that we are able to determine if our programs actually work to improve relapse and recidivism rates among offenders. After all, if these programs simply "feel good" but, in reality, provide little actual and observable benefit to society in general and the offender in particular, their usefulness is questionable.
It is important that agencies engage in earnest and sincere evaluation and that the use of evidence-based approaches is emphasized. By being evidence based, agencies have a means of demonstrating their positive impact on society and, through the evidence that they produce, provide the means by which other agencies can replicate their practices. It is important for correctional counselors to understand the importance of evaluative research and to understand that the role of the evaluator is a helpful one. Indeed, the best evaluator is one who has also worked in the treatment field, particularly in the same field that is being subjected to their evaluation. Such evaluators usually are more in tune with the processes that they evaluate, and they are also better able to interpret and explain the outcomes that are observed. Such evaluators also tend to be effective in explaining their results to agency staff and demonstrating how future interventions can be optimized. Further, it is important that evaluation designs ensure both validity and reliability. Where validity ensures that one is measuring what one intends to measure, reliability ensures that the measurement yields consistent results each time it is administered. In the field of correctional counseling, the issues that are evaluated require that specific attention be given to the validity and reliability of the evaluation process. The use of standardized instruments helps to facilitate this process since such instruments have been tested for their ability to provide valid and reliable data. Presuming the evaluator ensures that appropriate methodological principles are followed, evaluations that use standardized instruments will typically be superior to those that do not. Lastly, ethics in research should be maintained by the evaluator.
Just as with correctional counselors, the issue of confidentiality is important. Clients should be given full disclosure of the nature of the study and of their rights when participating in research, and their informed consent should be obtained. Though clients will likely have been apprised of their rights to confidentiality during their initial entry into the treatment program, research evaluators should also cover these parameters with clients to ensure that they understand their role, the nature of the research, and their own right to autonomy. This is an important issue, particularly in cases where clients are court mandated. Beyond the participation of clients, agency staff should be encouraged to participate. In such cases, evaluators can integrate information from staff to provide a more multifaceted appraisal of the processes involved within the treatment facility. Further, staff will ultimately be participants in and recipients of the evaluative output, since agencies will usually find it necessary to consider changes and modifications to their programs as evaluations of their effectiveness are provided. It is in this manner, through the incorporation of evaluative data, that agencies can continually refine and improve their services and become evidence-based treatment providers in the truest sense of the term.
Essay Questions
· 1. Why is evaluative research important to improving correctional counseling processes?
· 2. Discuss the purpose of evaluation research. What might be some consequences of not conducting evaluation research?
· 3. Why are standardized instruments considered particularly valuable in evaluative research? What are some necessary characteristics of standardized assessment tools?
· 4. Discuss the various ethical principles related to conducting research with offenders. What are some of the recommendations noted by the American Psychological Association?
Treatment Planning Exercise
For this exercise, you will need to consider your readings in this chapter as they apply to prior readings from Chapter 8 on
Substance Abuse Counseling and Co-occurring Disorders and from Chapter 9 on Youth Counseling and Juvenile Offenders. Your assignment is as follows: You are a researcher and a correctional counselor who has recently been hired by the community supervision system in your area. You have been asked to design and evaluate a treatment program for adolescent substance abusers that has been implemented within one of the larger cities in your state. Specifically, you are asked to examine how various aspects of social learning theory may lead to learned substance abuse within families of origin and within juvenile peer groups. With this in mind, you must then explain how various treatment options might best address substance abuse issues with this population. The program that you will evaluate uses all of the interventions listed in Chapter 8, and you are free to select any theoretical orientation that you desire from Chapters 5, 6, or 7 of this text. Lastly, you will need to provide a clear methodology for testing and evaluating your proposed program, including such factors as the validity and reliability of your study, the validity and reliability of your assessment instruments (if any), the use of control and experimental groups, and the ethical issues that might be involved with conducting such research.
Bibliography
American Psychiatric Association (2000). Diagnostic and statistical manual of mental disorders. Arlington, VA: American Psychiatric Association.
Austin, J. (2006). How much risk can we take? The misuse of risk assessment in corrections. Federal Probation, 20(2). Retrieved from: http://www.uscourts.gov/fedprob/September_2006/risk.html#basics.
Belenko, S. (2001). Research on drug courts: A critical review. 2001 update. New York: National Center on Addiction and Substance Abuse. Retrieved from: www.drugpolicy.org/docUploads/2001drug-courts.pdf.
Bureau of Justice Assistance. (2008). Center for program evaluation and performance measurement. Washington, DC: Bureau of Justice Assistance. Retrieved from: http://www.ojp.usdoj.gov/BJA/evaluation/index.html.
Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research. Boston, MA: Houghton Mifflin Company.
Center for Substance Abuse Treatment. (2005). Substance abuse treatment for adults in the criminal justice system. Treatment Improvement Protocol (TIP) Series 44. DHHS Publication No. (SMA) 05-4056. Rockville, MD: Substance Abuse and Mental Health Services Administration.
Dana, R. H., Behn, J. D., & Gonwa, T. (1992). A checklist for the examination of cultural competence in social service agencies. Research on Social Work Practice, 2, 220–233.
Hanser, R. D. (2006). Special needs offenders in the community. Upper Saddle River, NJ: Prentice Hall.
Hanser, R. D. (2009). Community corrections. Belmont, CA: Sage Publications.
Lempert, R. O., & Visher, C. A. (Eds.). (1987). Randomized field experiments in criminal justice agencies: Workshop proceedings. Washington, DC: National Research Council.
McCollister, K. E., & French, M. T. (2001). The economic cost of substance abuse treatment in criminal justice settings. Miami, FL: University of Miami. Retrieved from: www.amityfoundation.com/lib/libarch/CostPrisonTreatment.pdf.
Mire, S. M., Forsyth, C., & Hanser, R. D. (2007). Jail diversion: Addressing the needs of offenders with mental illness and co-occurring disorders. Journal of Offender Rehabilitation, 45(1/2), 19–31.
National Institute of Justice. (1992). Evaluating drug control and system improvement projects: Guidelines for projects supported by the Bureau of Justice Assistance.
Schutt, R. K. (2006). Investigating the social world: The process and practice of research (5th ed.). Thousand Oaks, CA: Pine Forge Press.
Skinner, H. (1995). Drug Abuse Screening Test. Toronto, Canada: Addiction Research Foundation.