A pilot evaluation of the Family Caregiver Support Program
Ya-Mei Chen a,*, Susan C. Hedrick b, Heather M. Young c
a School of Nursing, University of Washington, United States
b Health Services, School of Public Health, University of Washington; Research Career Scientist, VA Medical Center, United States
c University of Washington; Grace Phelps Distinguished Professor and Director of Rural Health Research Development, Oregon Health and Sciences University, United States
Evaluation and Program Planning 33 (2010) 113–119
ARTICLE INFO
Article history:
Received 26 November 2008
Received in revised form 30 July 2009
Accepted 8 August 2009
Keywords:
Family Caregiver Support Program
Program evaluation
Caregiver
Support services
ABSTRACT
The purposes of this study were to evaluate a federal- and state-funded Family Caregiver Support Program (FCSP) and to explore what types of caregiver support services are associated with what caregiver outcomes. Information was obtained on a sample of 164 caregivers' use of eleven different types of support service. Descriptive and comparative analyses were used to detect differences between users and nonusers of caregiver support services. Six outcome measures were included: overall caregiving appraisal, caregiving burden, caregiving mastery, caregiving satisfaction, hours of care, and service satisfaction. Using counseling and education services was associated with a lessening of subjective burden; using financial support services was associated with more beneficial caregiver appraisals, such as better caregiving mastery. The findings are practical and helpful for future caregiver service and program development, evaluation, and policy making in support of caregivers. In addition, the evaluation method demonstrated in the study provides a simple and moderately effective approach for service agencies that wish to evaluate their family caregiver support services.
Published by Elsevier Ltd.
1. Introduction
An estimated 52 million Americans function as informal
caregivers of ill or disabled individuals, and 23 percent (22.4
million) of U.S. households are caring for a relative or friend
who is
at least 50 years old (AARP, 2004; Coleman & Pandya, 2002).
One
fifth of all family members of seriously ill patients have to quit
work or make another major life change in order to provide
care,
and almost one third report the loss of their entire savings
(GAO,
1994). Furthermore, financial or other unmet needs may impede
caregivers’ ability to function effectively, both in their own
day-to-
day lives and in their role as an ongoing support system for
their
patients (Kristjanson, Atwood, & Degner, 1995; Tringali, 1986).
As a
result, the need to provide support to caregivers has gradually
gained societal attention, and many publicly and privately
funded
services have been developed to achieve this goal. The National
Family Caregiver Support Program, for example, authorizes
local
Area Agencies on Aging (AAAs) to provide caregivers with
various
support services, including caregiver training, respite care, and
supplemental services, among others. However, caregiver
support
services vary a great deal, and research findings regarding the
effects of these services have shown mixed results. The research
has also shown the need for a uniform method of evaluating
caregiver support services. Feinberg and Newman (2006) studied administrators' experiences of implementing the National Family Caregiver Support Program in all 50 states and found substantial unevenness in service programs across states. They therefore suggested that a uniform assessment and evaluation tool is necessary to better serve family caregivers.
1.1. Background of the problem
Caregiver support services most commonly provide information access, caregiver education and training, and respite and supplemental services. Research findings regarding the effects of these services have shown mixed results. Some studies, including those with rigorous designs such as randomized controlled trials, showed caregiver support services either to have little or no impact on caregivers' outcomes, or to be effective only for a subgroup of the caregiver population. Other studies, however, showed these services to be effective in various respects in
supporting family caregivers (Brodaty, Green, & Koschera,
2003;
Burns, Nichols, Martindale-Adams, Graney, & Lummus, 2003;
Gallagher-Thompson et al., 2003; Lee & Cameron, 2004; Maas
et al., 2004; Newcomer, Yordi, DuNah, Fox, & Wilkinson,
1999;
Roberts et al., 1999; Toseland, Blanchard, & McCallion, 1995;
Zank
& Schacke, 2002). To prepare for the current study, we
completed a
systematic review of 34 previous studies of caregiver interventions. The studies reviewed were conducted between 1984 and 2004 and used either randomized controlled trials or quasi-experimental controlled designs. Each of these studies investigated one of four categories of services, as shown in Table 1 below. Our review of these studies found that only a little more than half of them showed services to result in any benefit for family caregivers.

Table 1
Summary of 34 studies reviewed.

Intervention studied                                          N    Sig. positive effects    No effects
Adult day care/respite services                              10    6                        4
Caregiver training/counseling/support group                  13    6                        7
Supplemental services (i.e., meal delivery service,
  transportation, homemaker, or home aide care)               3    2                        1
Coordinated program (i.e., all-inclusive care for the
  elderly, containing more than one of the three
  categories above)                                           8    3                        5

References reviewed: Berry et al., 1991; Brodaty et al., 2003; Burns et al., 2003; Chang, 1999; Coon et al., 2003; Cox, 1997; Edelman & Hughes, 1990; Fox et al., 2000; Gallagher-Thompson et al., 2003; Gaugler et al., 2003a,b; Gitlin et al., 2003; Gottlieb & Johnson, 1993; Hepburn et al., 2001; Kemper, 1988; Kosloski & Montgomery, 1993, 1994; Krout, 1995; Lawton et al., 1989; Maas et al., 2004; Miller et al., 1999; Montgomery & Borgatta, 1989; Montgomery et al., 1985; Newcomer et al., 1999; Quayhagen et al., 2000; Roberts et al., 1999; Toseland et al., 1995, 2001, 2004; Tourigny et al., 2004; Yordi et al., 1997; Zank & Schacke, 2002; Zarit et al., 1998.
Our review also indicated that different services might be
associated with different caregiver outcomes. For example,
research studies that found a positive effect of caregiver
education
and training showed these services to increase caregiver effec-
tiveness in solving problems, improve caregiver feelings of
competence, and reduce caregivers’ subjective and objective
burdens (Brodaty et al., 2003; Burns et al., 2001, 2003; Chang,
1999; Coon, Thompson, Steffen, Sorocco, & Gallagher-
Thompson,
2003; Gallagher-Thompson et al., 2003; Gitlin et al., 2003;
Hepburn, Tornatore, Center, & Ostwald, 2001; Montoro-
Rodriguez,
Kosloski, & Montgomery, 2003; Quayhagen et al., 2000;
Toseland,
McCallion, Smith, & Banks, 2004; Toseland et al., 2001;
Weuve,
Boult, & Morishita, 2000). In regard to the effect of respite and
supplemental services, on the other hand, studies that found a
positive effect showed these services to decrease caregiver
stress;
decrease feelings of role overload, depression, burden, and time
commitment; and improve overall psychological well-being
(Berry, Zarit, & Rabatin, 1991; Cox, 1997; Gaugler et al.,
2003a,b; Gottlieb & Johnson, 1993; Krout, 1995; Montgomery
&
Borgatta, 1989; Okamoto, Murashima, & Saito, 1998; Zarit,
Stephens, Townsend, & Greene, 1998).
These results demonstrate the difficulty of evaluating
caregiver support services. Most evaluation tools used in previous studies tended to assess one particular outcome, such as caregiver burden, while giving less attention to others, such as caregiver mastery. Such evaluations may yield nonsignificant findings when the chosen tool does not focus on the caregiver outcomes a given service actually affects.
Therefore,
developing a uniform evaluation method that is broad enough to
cover multiple facets of caregiver outcomes is a challenging but
important task. In addition, understanding whether different
services relate to different caregiver outcomes, and which
services might best support particular caregiver outcomes, could
be very helpful for choosing or developing evaluation methods
and tools.
1.2. Purpose of study
The purposes of this study were twofold: the first purpose was
to test a simple evaluation method that would be easy for
service
agencies to adopt and that could be adopted on a wide scale.
The
second purpose was to determine whether different types of
caregiver support services are associated with different
caregiver
outcomes. We collaborated with Aging and Disability Services
(ADS) in Seattle, the local AAA, to achieve our purpose
through a
pilot study that evaluated a federal- and state-funded project,
the
Family Caregiver Support Program (FCSP), in King County in
Washington State. In this region, the FCSP provides various
caregiving support services including adult day care, in-home
respite, information services, and financial assistance to the
caregiver (ADS, 2003).
2. Methods
2.1. Design, setting, and participants
This study was a descriptive, one-time survey of caregivers living in King County who were reported as having received services from local ADS service agencies. Four local
agencies
agreed to send out an invitation letter and questionnaire to all
caregivers who had received FCSP-funded services between
2001
and 2003. The University of Washington Human Subjects
Division
approved this study.
2.2. Questionnaire development
The researchers assisted the FCSP team in selecting tools
appropriate for evaluating the FCSP. Several tools were selected
for
review, including the "Caregiver Appraisal Scale" (Lawton & Brody, 1969), the "Subjective and Objective Burden Scale" (Montgomery, Gonyea, & Hooyman, 1985), and the "Mastery Scale" (Pearlin & Schooler, 1978). To ensure the usefulness of the evaluation tool, and to ground the selection in both scientific evidence and hands-on experience, the team invited the four participating local caregiver service agencies to contribute their expertise. After thorough discussion, the team selected the "Caregiver Appraisal Scale" (CAS) developed by M.P. Lawton and E.M. Brody (1969) for the appropriateness of its language and its coverage of the broad scope of relevant caregiver experiences (Vitaliano, Young, & Russo, 1991). Other tools reviewed target only a single facet of the caregiving experience, and such single-perspective tools were less adequate for the purposes of the current study: the FCSP is a program with multiple components, including various types of services, and caregivers' experiences are likely multifaceted as well. The agencies consulted further suggested shortening the CAS questionnaire so as not to overburden caregivers.
Consequently, three subscales were chosen for use in the study: "Subjective Burden" (e.g., "Your health has suffered because of the care you must give to care receiver" or "Very tired as a result of caring for care receiver"), "Caregiving Mastery" (e.g., "I can fit in most of the things I need to do in spite of the time taken by caring for care receiver"), and "Caregiving Satisfaction" (e.g., "Helping care receiver has made you feel closer to him/her" or "Care receiver shows real appreciation of what you do for her/him"). The revised Caregiver Appraisal Scale (hereafter referred to as the ADS-CAS) was thereby reduced from 47 to 34 items, with 13, 12, and 9 items on the "Subjective Burden," "Caregiving Mastery," and "Caregiving Satisfaction" subscales, respectively.
The two subscales of the CAS that were not used in this study are "Impact of Caregiving" and "Cognitive Reappraisal." The former was excluded because of its high correlation with the "Subjective Burden" subscale (Deeken, Taylor, Mangan, Yabroff, & Ingham, 2003; Lawton, Kleban, Moss, Rovine, & Glicksman, 1989); the latter was excluded as not reflecting the purposes of the FCSP.
Participants responded to each item on the ADS-CAS on a 5-point scale, from "Rarely or never" (1) to "Most of the time" (5). Higher total and subscale scores represent more positive caregiving appraisals, except on the Subjective Burden subscale, where higher scores indicate greater perceived burden.
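To make the scoring concrete, the sketch below shows one plausible way the ADS-CAS could be scored. The column names and the set of reverse-keyed items are hypothetical (the paper does not publish the item list); only the subscale sizes and the 1-5 coding come from the text.

```python
import pandas as pd

# Hypothetical column names; the paper does not publish the item list.
SB = [f"sb{i}" for i in range(1, 14)]   # Subjective Burden, 13 items
CM = [f"cm{i}" for i in range(1, 13)]   # Caregiving Mastery, 12 items
CS = [f"cs{i}" for i in range(1, 10)]   # Caregiving Satisfaction, 9 items
REVERSED = ["cm2", "cm5"]               # assumed reverse-keyed items

def score_ads_cas(responses: pd.DataFrame) -> pd.DataFrame:
    """Score ADS-CAS items coded 1 ('Rarely or never') to 5 ('Most of the time')."""
    r = responses.copy()
    r[REVERSED] = 6 - r[REVERSED]  # flip the 1-5 scale for reverse-keyed items
    scores = pd.DataFrame({
        "subjective_burden": r[SB].mean(axis=1),        # higher = more burden
        "caregiving_mastery": r[CM].mean(axis=1),       # higher = more positive
        "caregiving_satisfaction": r[CS].mean(axis=1),  # higher = more positive
    })
    # For the total, burden items are flipped so that a higher total means a
    # more positive overall appraisal (an assumption consistent with the text).
    r[SB] = 6 - r[SB]
    scores["cas_total_mean"] = r[SB + CM + CS].mean(axis=1)
    return scores
```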
In addition to the three subscales, the survey gathered
information on caregivers’ age, gender, and relation to care
receivers; the type of care provided; the hours of care
(including
hands-on and supervisory care) provided in the week prior to
the
survey; and the types of services that the caregivers received.
Agencies reported providing a list of services: (1) information about services, (2) assistance in accessing services, (3) caregiver counseling, (4) caregiver education and training, (5) financial assistance, (6) respite services/adult day care, (7) help with housework, (8) delivered meals, (9) transportation, and (10) cash support for caregiving. Caregivers were asked whether they had received each
of
these services. As a result of discussions with the FCSP team
and
with local service agencies, we further grouped these 10
services
into three categories based on the nature of the services: (1)
counseling and education services, (2) respite and supplemental
services, and (3) financial support services. The counseling and
education services category included information about
services,
assistance in accessing services, caregiver counseling, and care-
giver education and training. The respite and supplemental services category
included respite services/adult day care, help with housework,
delivered meals, and transportation. The financial support
services
category included financial assistance and cash support for
caregiving. A general service satisfaction question was also
included in the ADS-CAS, with a 4-point response scale: "Poor" (1), "Fair" (2), "Good" (3), or "Excellent" (4).
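A minimal sketch of the grouping step follows, with the 10 service labels paraphrased from the text (the exact questionnaire wording is not reproduced in the paper):

```python
# Map each of the 10 surveyed services to one of the three categories
# described above; labels are paraphrased, not verbatim questionnaire items.
SERVICE_CATEGORY = {
    "information about services": "counseling_education",
    "assistance in accessing services": "counseling_education",
    "caregiver counseling": "counseling_education",
    "caregiver education and training": "counseling_education",
    "respite services/adult day care": "respite_supplemental",
    "help with housework": "respite_supplemental",
    "delivered meals": "respite_supplemental",
    "transportation": "respite_supplemental",
    "financial assistance": "financial_support",
    "cash support for caregiving": "financial_support",
}

def category_use(services_received: set[str]) -> dict[str, bool]:
    """Flag whether a caregiver used any service in each of the three categories."""
    return {
        cat: any(SERVICE_CATEGORY[s] == cat for s in services_received)
        for cat in set(SERVICE_CATEGORY.values())
    }
```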
2.3. Data collection methods
Each agency sent each of their clients a cover letter, a
questionnaire, and a postage-paid return envelope addressed to
ADS. To protect the clients’ confidentiality, the questionnaires
were anonymous, and no follow-up occurred. A total of 866
survey
packets were sent out, and 177 questionnaires (20.4%) were
returned.
2.4. Quantitative data analysis
Data analyses were conducted using the Statistical Package for
the Social Sciences (SPSS-PC) version 12.0. Prior to analyzing
the
data, all items were examined to assess the accuracy of variable
calculations and missing values. If variables were missing at a
rate
larger than 5%, multiple imputation was applied (Rubin, 1977;
Schafer, 1997, 1999, 2000). Cronbach’s alpha was used to
evaluate
the internal consistency of each subscale on the ADS-CAS.
Descriptive analyses were used to depict the characteristics of
caregivers and the services they received. Two steps were
included in the evaluation method. First, we examined the gaps
between the types of care that the caregivers provided and the
types of services that the caregivers received. Second, MANCOVA was used to compare caregivers' appraisals in the following
categories: (1) those who reported using any of the 10 services
versus those who did not; (2) caregivers who used one
particular
service versus those who did not report using that particular
service (For example, in comparing caregivers who had received
financial services with caregivers who reported they had not
received
financial services, ‘‘users’’ may have received other services as
well as
financial services; ‘‘nonusers’’ of financial services may have
received
other services or may have reported not receiving any services);
and
(3) those who used only one out of the three categories of
services
versus those who did not use that particular service category.
(For
example, caregivers who had received services in the financial
services category only, and no services from other categories,
were
compared with caregivers who reported they had not received
any
services from the financial services category. ‘‘Nonusers’’ in a
category
may have received services in other categories, or may have
reported
not receiving any services at all.) Clients’ age, gender, and the
number of care activities they provided were controlled as
covariates. The outcome measures were: (1) the item mean of
ADS-CAS, (2–4) the item means of each of the three subscales
of
ADS-CAS, (5) the total hours the caregiver spent on caregiving
during the previous week, and (6) the caregiver’s satisfaction
with
services received.
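As a rough illustration of this pipeline outside SPSS, the Python sketch below mirrors the steps described above: imputation, internal consistency, and a MANCOVA-style test. Column names (`cas_total`, `burden`, `mastery`, `satisfaction`, `hours`, `svc_sat`, `used_any`, `age`, `gender`, `n_activities`) are assumptions, and scikit-learn's IterativeImputer (a single MICE-style pass) stands in for the full multiple imputation the authors used.

```python
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from statsmodels.multivariate.manova import MANOVA

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Internal-consistency estimate for a block of scale items
    (rows = respondents, columns = items)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def analyze(df: pd.DataFrame, item_cols: list[str]) -> None:
    # Impute item-level missingness (9-13% in the study); one MICE-style
    # pass here, where the authors used full multiple imputation.
    df[item_cols] = IterativeImputer(random_state=0).fit_transform(df[item_cols])
    print("Cronbach's alpha:", round(cronbach_alpha(df[item_cols]), 2))

    # MANCOVA: six outcomes regressed on the user/nonuser indicator
    # plus the three covariates (gender assumed coded 0/1).
    mv = MANOVA.from_formula(
        "cas_total + burden + mastery + satisfaction + hours + svc_sat"
        " ~ used_any + age + gender + n_activities",
        data=df,
    )
    print(mv.mv_test())
```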
2.5. Text summary
Caregivers’ text feedback was summarized and analyzed for
common themes using content analysis.
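A keyword-frequency count of the kind reported later in Table 3 can be sketched in a few lines. The stems below follow Table 3's keywords; matching by prefix is an assumption about how variants (e.g., "tired," "needs") were grouped.

```python
import re
from collections import Counter

# Keyword stems drawn from Table 3; prefix matching is an assumption.
STEMS = ["problem", "hard", "frustrat", "stress", "tir", "collapse",
         "thank", "appreciat", "grateful", "wonderful",
         "availab", "need", "aware", "financial", "change"]

def keyword_counts(comments: list[str]) -> Counter:
    counts: Counter = Counter()
    for text in comments:
        for token in re.findall(r"[a-z']+", text.lower()):
            for stem in STEMS:
                if token.startswith(stem):
                    counts[stem] += 1
                    break  # count each token against one stem only
    return counts

print(keyword_counts(["I was so tired and stressed; thank you for the help I needed."]))
```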
3. Results
The response rate was 20.4%. Five questionnaires were returned blank, and two were returned with only text comments. Furthermore, six caregivers stated that they were not providing any care at the time of the survey. As a result, only 164 questionnaires were entered into the quantitative data analysis, an 18.9% usable response rate.
3.1. Psychometric properties of ADS-CAS
Most ADS-CAS items had missing data at rates between 9% and 13%; therefore, multiple imputation was used (Rubin, 1977; Schafer, 1997). After reverse coding and multiple imputation, Cronbach's alpha for the overall ADS-CAS was 0.90. The power to detect statistically significant differences in ADS-CAS scores between caregivers who reported having received at least one service and those who reported receiving none was .78.
3.2. Description of care provided by caregivers and of services
provided to caregivers
About 74% of caregivers were female, 17% were male, and 9%
did
not specify their gender. Their ages ranged from less than 20 to
more than 81 years old, with an average age of 57. The majority
stated that they were caring either for a spouse/partner (48.8%)
or
for parents (41.2%).
Caregivers provided from one to nine types of care to their care
receivers, with an average of 6.8 (SD = 2.24). About one third
provided all nine kinds of caregiving activities listed in the
questionnaire, including personal care, safety/supervision,
house-
keeping and laundry, meal preparation, medication monitoring,
transportation, shopping, financial management, and standby
help. The most common type of care provided was
transportation
(83.5%). Caregivers reported receiving a range of zero to seven
ADS
services, with an average of 1.91 (SD = 1.54). The service most
commonly used was information about services (52.4%). A
surprising percentage (14.6%) stated that they had not received
any services, even though all the caregivers surveyed had been
identified by agencies as service recipients. These caregivers
were
labeled ‘nonusers’ and used as the comparison group in the first
analysis. However, they cannot fully represent the real nonusers
in
the U.S. caregiver population. We discuss this issue further in
the
discussion section.
Table 2
MANCOVA results: marginal mean differences between users of any services and nonusers, users of a particular service and nonusers of that particular service, and users of a single service category and nonusers of that service category (N = 164).

                                     CAS       SB       CM        CS       HOUR       SS
Individual services
Use or non-use of services          −3.89    −1.58    −3.00**   −3.46*     21.65    −0.03
Services information                −1.39    −0.38    −2.02*    −0.11     −11.27    −0.04
Assistance in accessing services     1.91    −1.53    −1.28      1.66     −11.72     0.28*
Caregiver counseling                 1.48    −1.09     0.08      0.30     −12.03     0.32*
Caregiver training or education      2.36    −2.28     0.82     −0.74     −14.08     0.09
Financial assistance               −10.82     3.57    −4.67*    −2.49     −23.15     0.35
Respite services                    −7.69     2.02    −2.97     −2.70*     45.91***  0.16
Help with housework                 −3.05    −0.40    −4.45*    −1.00       4.46     0.17
Delivered meals                     −0.15     0.48    −0.25     −0.58       3.63     0.07
Transportation                      −5.81     1.06    −1.42     −3.33*    −23.76    −0.15
Cash to support caregiving          −2.40     3.70     0.02      1.28     −20.49     0.51*
Service categories
Counseling and education             2.81    −2.80    −0.33      0.33     −15.39    −0.11
Respite                             −3.24     1.23     0.27     −2.28      39.28*** −0.05
Finance                             28.17   −11.66    12.57*     6.63       7.38     0.06

Note: Values are marginal mean differences (users' scores − nonusers' scores) on the ADS-Caregiver Appraisal Scale and related measures. CAS = total CAS score; SB = subjective burden; CM = caregiver mastery; CS = caregiver satisfaction; HOUR = hours of care; SS = service satisfaction. Analyses controlled for caregivers' age, gender, and number of caregiving activities provided. [Bold] = 0.05 < p < 0.070.
* p < 0.05. ** p < 0.01. *** p < 0.001.
Table 3
Text summary.

Category               Keywords                      Frequency
Exhausted caregivers   Problem                       4
                       Hard                          3
                       Frustrate/stress/tire         6
                       My health                     1
                       Collapse                      1
Appreciation           Thank/appreciate/grateful     16
                       Wonderful                     4
Help needed            Available                     9
                       Need/need...help              32
                       Aware                         1
                       Financial                     6
                       [Staff] change                2
3.2.1. Gaps between care provided by caregivers and services
provided to caregivers
Potential gaps were found between care provided and services
received. The most common types of care provided by the
caregivers were transportation, financial management, and
medication monitoring. However, the most common services
that
these caregivers reported having received were services information, respite care, and assistance in accessing services. On the
one
hand, this indicates a good availability of the former three
services.
On the other hand, the services provided may not match what
caregivers need most, such as help with transportation. For
example, only 9.8% of caregivers received transportation
services,
while 83.5% of caregivers provided such services to their
relatives.
Another potential gap worth noting is the high rate of
medication
management assistance (79.9%) provided by caregivers versus
the
low rate of training and education received by caregivers (14%).
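Using the percentages reported above, the gap computation itself is simple arithmetic; a minimal sketch:

```python
import pandas as pd

# Proportions reported in Section 3.2.1 for two matched care/service pairs.
gaps = pd.DataFrame(
    {"care_provided": [0.835, 0.799], "service_received": [0.098, 0.140]},
    index=["transportation", "medication management vs. training/education"],
)
gaps["gap"] = gaps["care_provided"] - gaps["service_received"]
print(gaps.sort_values("gap", ascending=False))
```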
3.3. Mean score differences in outcome measures (ADS-CAS)
between
users of any services and nonusers, users of a particular service
and
nonusers of that particular service, and users of a single service
category and nonusers of that service category
After controlling for caregivers’ age, gender, and the number of
care activities provided, caregivers who received assistance in
accessing services, who used caregiver counseling services, or
who
obtained cash to support caregiving showed significantly higher
satisfaction toward the services that they received (p < 0.05).
Caregivers who received information about services, financial
assistance, and help with housework reported lower caregiver
mastery than did caregivers not using such services (p < 0.05).
Analysis of service categories revealed additional relationships.
The caregivers who received only financial support services
showed significantly higher mastery (p < 0.05) than did those
who did not use such services. The caregivers who received
only
respite services spent an average of 39.28 more hours caring for
care recipients in the week prior to our survey (p < 0.01) than
did
those not using respite services. Analysis also showed that
caregivers who received only counseling and education services
perceived less subjective caregiving burden (p = 0.056) than did
others, and that caregivers who received only financial support
services showed better overall caregiver appraisal (p = 0.058).
Analysis both of individual services and of the three service
categories revealed similar findings; therefore, a discussion of
individual services will not be given here. Table 2 presents full
results for both individual services and the three service
categories.
The results of the three analyses are presented in Table 2 as
mean
score differences between users of services and nonusers, users
of a
particular service and nonusers of that particular service, and
users
of a single service category and nonusers of that service
category.
3.4. Content analysis of open-ended comments
Seventy-two caregivers entered textual comments on their
questionnaires. Content analysis yielded 12 keywords and three themes: (1) exhausted caregivers (keywords: problem, hard, frustrate/stress/tire, my health, collapse); (2) appreciation of services received (keywords: thank/appreciate/grateful, wonderful); and (3) services needed (keywords: available, need/need...help, aware, financial, [staff] change). Many caregivers reported fatigue due to their caregiving responsibilities. They also indicated services that should be developed or improved to provide better support. The need for integrative services was frequently cited: "I was frustrated though that there is no one person and agency available to counsel or 'pull the picture' together."
Caregivers also reported difficulties in assessing their needs and
in
determining which services were available to meet these needs:
"Often services tell you what they don't do rather than ask what you need help with in caring for the person. By the time you sort through it all you find you don't/can't use services." Some caregivers reported that searching for and then waiting for services took too long to solve their immediate problems: "I wish I had taken this class before I got so worn out." "I wish I knew these services were available before my mom passed away." "If we can get transportation I expect and hope this will help [to solve the immediate conflict between this caregiver and his/her spouse]." (See Table 3 for the frequency of each keyword in caregivers' text feedback.)
4. Discussion
Findings from the current study can contribute not only to the
growing body of research in the area of caregiving support but
also
to the future development of caregiving support services in the
King County region.
4.1. Different caregiver appraisals between service users and
nonusers
The use of different services was associated with different caregiver appraisals, and these findings add substantially to the caregiving research literature. Using counseling and education services, such as caregiver counseling, was associated with a lessening of subjective burden, and using financial support services, such as cash support, was associated with a more beneficial caregiver appraisal.
Although in this study using respite and supplemental services
was
not associated with any beneficial outcome from the caregivers’
point of view, it is still important to provide such support
services.
Details about each service category are discussed below.
4.2. Counseling and education services
Research findings have shown that counseling and education
services are effective in helping caregivers to deal with their
own
psychological needs and in improving caregivers’ relationships
with care recipients (Brodaty et al., 2003; Burns et al., 2003;
Coon
et al., 2003). The findings in the current study support this
literature. Caregivers who used counseling and education services reported less subjective burden. Our participants provided
numerous remarks explaining that having someone to talk to or
attending a support group or counseling class can prevent
caregiver ‘‘burn out,’’ and they said that because of these
services
they saved time that would otherwise have been spent dealing
with their negative emotions. However, it is also important to
point out the possibility that caregivers who already perceive
fewer burdens would be more likely to use this type of service.
Those caring for someone with more severe disabilities and
those
who have fewer sources of caregiving help may perceive a
higher
burden and therefore have less energy to use these services
(Markle-Reid & Browne, 2001; Toseland, McCallion, Gerber, &
Banks, 2002). It is important to consider whether or not
counseling
and education services are more useful for caregivers with light
care loads. A full examination of this topic will require future
studies with more rigorous methodologies (such as randomized
controlled trials or quasi-experimental designs).
4.3. Financial services
Compared to the other two categories of service, using financial
support services was associated with more positive caregiving
appraisals. These services provide a flexible pool of funds to
Medicaid-eligible persons to purchase goods or services for
family
caregivers. Since, according to one study, one third of family
caregivers reported the loss of all of their family savings (GAO,
1994), providing financial services is likely to give caregivers
the
opportunity to focus on their caregiving activities and to
develop
higher confidence and satisfaction. However, we should not
ignore
another possible explanation—that caregivers who were able to
gain access to these funds were competent users of the system
who
already had higher caregiving mastery and caregiving appraisal.
Providing financial support to caregivers is a relatively new
service
developed in the last 15 years (Doty, Jackson, & Crown, 1998).
Only
a limited number of research studies have examined the effect
of
financial support services for caregivers (Eckert, Morgan, &
Swamy,
2004; Mahoney, Simon-Rusinowitz, Loughlin, Desmond, &
Squil-
lace, 2004). The findings of this study encourage further
investigation of the cost-effectiveness of providing financial
support services to caregivers.
4.4. Respite and supplemental services
In contrast to findings in previous literature (Cox, 1997;
Gaugler
et al., 2003a,b; Krout, 1995; Zarit et al., 1998), this study found
that
caregivers using respite and supplemental services spent more
hours on caregiving than nonusers and did not show more positive caregiver appraisals. Although caregivers in the current
study who used this type of service did not report any beneficial
outcomes, it is important to recognize that this group of
caregivers
might be under a great deal of stress due to their care
responsibilities and that they might still have a significant need
for such support. Further analysis showed that older caregivers
were the group who most used respite services and who most
often
requested help with housework; this group also spent more time
on caregiving. This group of caregivers was more likely to be
made
up of spouses than of children, and they may also have more
health
problems of their own. Thus, this group might be more likely to
be
on the edge of giving up caregiving out of exhaustion. The
services
that they received apparently did not meet their needs. It is
crucial
to learn more about the needs of this group of caregivers and to
modify these services to meet their needs.
4.5. Services that should be developed
Helping caregivers to ‘‘pull the picture together’’ should be the
first task for case managers and service providers when first
contacting caregivers. Caregivers are already exhausted from
their
caregiving tasks, and it is an added burden for them to try to
find
support from different resources. It will be crucial to develop a
single window that could both provide all the information that
caregivers need and help them access services in a more
efficient
manner. This would assure that available services are used by
those who need them. Service availability is an important issue,
and it requires more attention from providers and policy
makers.
Several potential gaps were noted between care activities and
services received by caregivers, such as in transportation and
medication management. These potential gaps may either
indicate
low service availability or accessibility, which needs
improvement,
or may simply indicate that caregivers were confident in their
ability
to provide such care activities and had no need for additional
support
services. Given that transportation was the most common type
of
care activity provided by caregivers in the current study, and
was
commonly mentioned in the text feedback from our caregivers,
it
would likely be one of the services that caregivers would use if
it
were more available to them. The results showed that caregivers
need access to transportation services and expect that getting
such
services would solve their current problems. Furthermore, in a
recent national study, the rate of transportation services used by
caregivers was almost twice the rate reported in the current
study
(9.8% in the current study vs. 18% in the AARP study) (AARP,
2004).
This suggests that there is a need to make this service more
available
to caregivers in King County.
There is also a potential gap between the high rate of
medication management assistance and the low rate of training
services on this subject. The percentage of caregivers in the
current
study who provided medication monitoring was much higher
than
the percentage of caregivers in the national study who did so
(80%
vs. 41%) (AARP, 2004). This strongly suggests the importance
of
providing more medication management education programs for
caregivers in this region. The low rate of caregiver education
and
training programs in this region indicates a gap that needs to be filled.
For future study on caregivers living in Seattle/King County, it
might be important to investigate their knowledge about the
medications they give to their care receivers and what kind of
support they need to help them perform this care activity better.
Another potential gap worth noting is that 14.6% of caregivers
stated that they did not receive any services. All the caregivers
surveyed were listed by the agencies as having received some
form
of services. It is intriguing that this group either did not
remember
receiving or did not believe they had received services. It could
be
that the amount of services received was not substantial enough
for caregivers to note, or that the services provided were not
what
these caregivers were looking for. Both possibilities point to inadequate services and warrant further investigation. Also, further study of the differences
between
this group of caregivers, who likely received some services they
did
not remember or report, and real nonusers, who are in need but
do
not receive any services, will be important.
5. Limitations
This study has several methodological limitations. The cross-sectional design makes it impossible to draw causal inferences. The long and variable time between when caregivers received services and when they responded to the ADS-CAS was a threat to validity. Adding a variable to assess the time between
service use and survey response is recommended for future
studies. Moreover, past research has found that caregivers’
perceptions of distress may be influenced by different factors at
different stages of their caregiving (Vitaliano et al., 2002).
Therefore, a longitudinal follow-up would help to determine the
optimal time to provide caregivers with certain kinds of
services,
and this information would be valuable for making future
policy.
Another limitation of this study was the low response rate of
20.4%. We explored several potential reasons. The first factor that might have contributed is that many caregivers do not self-identify with the term "caregiver." That is probably why five questionnaires
were
returned blank. This has been a recurring theme and a challenge
for
the implementation and evaluation of family caregiver supports
in
the United States (Feinberg & Newman, 2006). Furthermore, the
low
response rate may well have to do with the substantial length of
the
study's questionnaire. Jepson, Asch, Hershey, and Ubel (2005) studied
studied
the correlation between response rate and length of
questionnaires
and suggested that questionnaires above a threshold of 1000
words
have lower response rates. Our survey questionnaire was over
5000
words, even after we removed two subscales from the CAS.
Detailed information about care recipients was not collected, for the same reasons stated above. This may limit the generalizability of the study findings to the caregiver population. We believe that the
characteristics of caregivers in the current study may be close to
the general caregiver population; the current study’s
demographics
show similar composition of age, gender, and number of
services
provided and received compared with caregiver demographics in
the National Family Caregiver Study (AARP, 2004). There may
well
be differences in other variables, of course. Increasing the
response
rate in future work will be important and can be addressed by
further decreasing the length of the questionnaire. Other methods that might increase the response rate in future studies include offering incentive payments, conducting follow-up mailings, or providing a small token of appreciation.
6. Lessons learned
6.1. Lesson learned for health care professionals
Findings from this study provide information for community
service providers, such as community nurses, case managers,
and
social workers, to better understand the relationships between
caregivers’ service use and caregiver appraisals. Sometimes
caregiving responsibilities begin without any warning, and care-
givers have no time to prepare themselves before assuming
these
responsibilities. They may not know what services are available
or
what services could be the most helpful. Knowledge generated
from
the current study can help case managers and service providers
to
help caregivers and care recipients anticipate and prioritize
their
needs, and to better support caregivers with the services they
need
most. For example, if a family caregiver expresses great
subjective
burden, our study findings suggest that case managers should
think
about offering counseling and education services first.
6.2. Lesson learned for Area Agencies on Aging in the United States
Our experience of evaluating the FCSP program at the county
level will be beneficial for other AAAs, particularly with
respect to
our collaboration experience with local service agencies. This
study helps to address Feinberg and Newman’s (2006) call for a
uniform assessment tool that can help us to understand and
redress the unevenness in current caregiver service programs.
The
tool that we developed as a result of this study has great
potential
to become a standard tool for other states or AAAs to use. Our
report of the gaps between the care provided and the services
received by caregivers in King County is another important
method
of looking at service adequacy in a region, and this also could
be
easily adopted in other areas.
6.3. Lesson learned for future questionnaire development
The evaluation of individual services revealed similar findings
to the evaluation of the three service categories. Therefore, we
recommend listing three service categories in future question-
naires instead of listing all of the detailed services provided by
agencies. Listing all the individual services not only increases
the
time required to complete the questionnaire, but might also
unnecessarily confuse caregivers, since questionnaires that list
individual services require the caregiver to be able to identify
what
particular service(s) they received.
7. Conclusion
This pilot study was designed to evaluate the FCSP, but it also
provided valuable information about the effect of caregiver
support services as well as an understanding of what types of
services might be associated with particular caregiver outcomes.
These findings are useful to community care professionals and
are
also of practical value to program planners, policy makers, and
formal care providers. The study’s findings can also serve as a
basis
for more rigorous future evaluations of caregiver support
services.
Acknowledgements
The authors greatly appreciate the support and advice of
Rosemary Cunningham, Margaret Casey, and all of the team
members on the Family Caregiver Support Program at Aging
and
Disability Services. The authors would also like to extend their
gratitude to Senior Services, the Evergreen Healthcare-Geriatric
Regional Assessment Team, the Northshore Senior Center, and
the
Kin On Community Caregiver Network-Caregiver Support.
Their
gracious help made this study possible.
References
AARP. (2004). Caregiving in the U.S. National Alliance for Caregiving and AARP.
ADS. (2003). Cash and Counseling Pilot Project. Seattle: Aging and Disability Services.
Berry, G. L., Zarit, S. H., & Rabatin, V. X. (1991). Caregiver activity on respite and nonrespite days: A comparison of two service approaches. Gerontologist, 31(6), 830–835.
Brodaty, H., Green, A., & Koschera, A. (2003). Meta-analysis of psychosocial interventions for caregivers of people with dementia. Journal of the American Geriatrics Society, 51(5), 657–664.
Burns, L. R., Walston, S. L., Alexander, J. A., Zuckerman, H. S., Andersen, R. M., Torrens, P. R., et al. (2001). Just how integrated are integrated delivery systems? Results from a national survey. Health Care Management Review, 26(1), 20–39.
Burns, R., Nichols, L. O., Martindale-Adams, J., Graney, M. J., & Lummus, A. (2003). Primary care interventions for dementia caregivers: 2-year outcomes from the REACH study. Gerontologist, 43(4), 547–555.
Chang, B. L. (1999). Cognitive-behavioral intervention for homebound caregivers of persons with dementia. Nursing Research, 48(3), 173–182.
Coleman, B. P., & Pandya, S. M. (2002). Family caregiving and long-term care. Washington, DC: Public Policy Institute, AARP.
Coon, D. W., Thompson, L., Steffen, A., Sorocco, K., & Gallagher-Thompson, D. (2003). Anger and depression management: Psychoeducational skill training interventions for women caregivers of a relative with dementia. Gerontologist, 43(5), 678–689.
Cox, C. (1997). Findings from a statewide program of respite care: A comparison of service users, stoppers, and nonusers. Gerontologist, 37(4), 511–517.
Deeken, J. F., Taylor, K. L., Mangan, P., Yabroff, K. R., & Ingham, J. M. (2003). Care for the caregivers: A review of self-report instruments developed to measure the burden, needs, and quality of life of informal caregivers. Journal of Pain and Symptom Management, 26(4), 922–953.
Doty, P., Jackson, M. E., & Crown, W. (1998). The impact of female caregivers' employment status on patterns of formal and informal eldercare. Gerontologist, 38(3), 331–341.
Eckert, J. K., Morgan, L. A., & Swamy, N. (2004). Preferences for receipt of care among community-dwelling adults. Journal of Aging & Social Policy, 16(2), 49–65.
Edelman, P., & Hughes, S. (1990). The impact of community care on provision of informal care to homebound elderly persons. Journal of Gerontology, 45(2), S74–S84.
Feinberg, L. F., & Newman, S. L. (2006). Preliminary experiences of the states in implementing the National Family Caregiver Support Program: A 50-state study. Journal of Aging & Social Policy, 18(3–4), 95–113.
Gallagher-Thompson, D., Coon, D. W., Solano, N., Ambler, C., Rabinowitz, Y., & Thompson, L. W. (2003). Change in indices of distress among Latino and Anglo female caregivers of elderly relatives with dementia: Site-specific results from the REACH national collaborative study. Gerontologist, 43(4), 580–591.
GAO. (1994). Long Term Care Population. Washington, DC: United States General Accounting Office.
Gaugler, J. E., Jarrott, S. E., Zarit, S. H., Stephens, M. A., Townsend, A., & Greene, R. (2003a). Adult day service use and reductions in caregiving hours: Effects on stress and psychological well-being for dementia caregivers. International Journal of Geriatric Psychiatry, 18(1), 55–62.
Gaugler, J. E., Jarrott, S. E., Zarit, S. H., Stephens, M. A., Townsend, A., & Greene, R. (2003b). Respite for dementia caregivers: The effects of adult day service use on caregiving hours and care demands. International Psychogeriatrics, 15(1), 37–58.
Gitlin, L. N., Winter, L., Corcoran, M., Dennis, M. P., Schinfeld, S., & Hauck, W. W. (2003). Effects of the home environmental skill-building program on the caregiver-care recipient dyad: 6-month outcomes from the Philadelphia REACH Initiative. Gerontologist, 43(4), 532–546.
Gottlieb, B., & Johnson, J. (1993). Impact of Day Care Programs on Family Caregivers of Persons with Dementia. Guelph, Ontario: Gerontology Research Centre, University of Guelph.
Hepburn, K. W., Tornatore, J., Center, B., & Ostwald, S. W. (2001). Dementia family caregiver training: Affecting beliefs about caregiving and caregiver outcomes. Journal of the American Geriatrics Society, 49(4), 450–457.
Jepson, C., Asch, D. A., Hershey, J. C., & Ubel, P. A. (2005). In a mailed physician survey, questionnaire length had a threshold effect on response rate. Journal of Clinical Epidemiology, 58(1), 103–105.
Kosloski, K., & Montgomery, R. J. V. (1993). The effects of respite on caregivers of Alzheimer's patients: One year evaluation of the Michigan model respite programs. Journal of Applied Gerontology, 12(1), 4–7.
Kosloski, K., & Montgomery, R. J. (1994). Investigating patterns of service use by families providing care for dependent elders. Journal of Aging and Health, 6(1), 17–37.
Kristjanson, L. J., Atwood, J., & Degner, L. F. (1995). Validity and reliability of the family inventory of needs (FIN): Measuring the care needs of families of advanced cancer patients. Journal of Nursing Measurement, 3(2), 109–126.
Krout, J. A. (1995). Senior centers and services for the frail elderly. Journal of Aging & Social Policy, 7(2), 59–76.
Lawton, M. P., & Brody, E. M. (1969). Assessment of older people: Self-maintaining and instrumental activities of daily living. The Gerontologist, 9, 179–186.
Lawton, M. P., Kleban, M. H., Moss, M., Rovine, M., & Glicksman, A. (1989). Measuring caregiving appraisal. Journal of Gerontology, 44(3), P61–P71.
Lee, H., & Cameron, M. (2004). Respite care for people with dementia and their carers. Cochrane Database of Systematic Reviews, 2, CD004396.
Maas, M. L., Reed, D., Park, M., Specht, J. P., Schutte, D., Kelley, L. S., et al. (2004). Outcomes of family involvement in care intervention for caregivers of individuals with dementia. Nursing Research, 53(2), 76–86.
Mahoney, K. J., Simon-Rusinowitz, L., Loughlin, D. M., Desmond, S. M., & Squillace, M. R. (2004). Determining personal care consumers' preferences for a consumer-directed cash and counseling option: Survey results from Arkansas, Florida, New Jersey, and New York elders and adults with physical disabilities. Health Services Research, 39(3), 643–664.
Markle-Reid, M., & Browne, G. (2001). Explaining the use and non-use of community-based long-term care services by caregivers of persons with dementia. Journal of Evaluation in Clinical Practice, 7(3), 271–287.
Montgomery, R. J., & Borgatta, E. F. (1989). The effects of alternative support strategies on family caregiving. Gerontologist, 29(4), 457–464.
Montgomery, R. J. V., Gonyea, J. G., & Hooyman, N. R. (1985). Caregiving and the experience of subjective and objective burden. Family Relations, 34, 19–26.
Montoro-Rodriguez, J., Kosloski, K., & Montgomery, R. J. (2003). Evaluating a practice-oriented service model to increase the use of respite services among minorities and rural caregivers. Gerontologist, 43(6), 916–924.
Newcomer, R., Yordi, C., DuNah, R., Fox, P., & Wilkinson, A. (1999). Effects of the Medicare Alzheimer's Disease Demonstration on caregiver burden and depression. Health Services Research, 34(3), 669–689.
Okamoto, M., Murashima, S., & Saito, E. (1998). Effectiveness of day care service for elderly patients with dementia and their caregivers as observed by comparison of days with and without day care services. Nippon Koshu Eisei Zasshi, 45(12), 1152–1161.
Pearlin, L. I., & Schooler, C. (1978). The structure of coping. Journal of Health and Social Behavior, 19(1), 2–21.
Quayhagen, M. P., Quayhagen, M., Corbeil, R. R., Hendrix, R. C., Jackson, J. E., Snyder, L., et al. (2000). Coping with dementia: Evaluation of four nonpharmacologic interventions. International Psychogeriatrics, 12(2), 249–265.
Roberts, J., Browne, G., Milne, C., Spooner, L., Gafni, A., Drummond-Young, M., et al. (1999). Problem-solving counseling for caregivers of the cognitively impaired: Effective for whom? Nursing Research, 48(3), 162–172.
Rubin, D. B. (1977). Formalizing the subjective notions about the effect of nonrespondents in sample surveys. Journal of the American Statistical Association, 72(359), 538–543.
Schafer, J. L. (1997). Analysis of incomplete multivariate data. London: Chapman & Hall.
Schafer, J. L. (1999). Multiple imputation: A primer. Statistical Methods in Medical Research, 8, 3–15.
Schafer, J. L. (2000). Software for multiple imputation. Retrieved May 5, 2002, from http://www.stat.psu.edu/~jls/misoftwa.html.
Toseland, R. W., Blanchard, C. G., & McCallion, P. (1995). A problem solving intervention for caregivers of cancer patients. Social Science & Medicine, 40(4), 517–528.
Toseland, R. W., McCallion, P., Gerber, T., & Banks, S. (2002). Predictors of health and human services use by persons with dementia and their family caregivers. Social Science & Medicine, 55(7), 1255–1266.
Toseland, R. W., McCallion, P., Smith, T., & Banks, S. (2004). Supporting caregivers of frail older adults in an HMO setting. The American Journal of Orthopsychiatry, 74(3), 349–364.
Toseland, R. W., McCallion, P., Smith, T., Huck, S., Bourgeois, P., & Garstka, T. A. (2001). Health education groups for caregivers in an HMO. Journal of Clinical Psychology, 57(4), 551–570.
Tourigny, A., Durand, P., Bonin, L., Hebert, R., & Rochette, L. (2004). Quasi-experimental study of the effectiveness of an integrated service delivery network for the frail elderly. Canadian Journal on Aging, 23(3), 231–246.
Tringali, C. A. (1986). The needs of family members of cancer patients. Oncology Nursing Forum, 13(4), 65–70.
Vitaliano, P. P., Scanlan, J. M., Zhang, J., Savage, M. V., Hirsch, I. B., & Siegler, I. C. (2002). A path model of chronic stress, the metabolic syndrome, and coronary heart disease. Psychosomatic Medicine, 64(3), 418–435.
Vitaliano, P. P., Young, H. M., & Russo, J. (1991). Burden: A review of measures used among caregivers of individuals with dementia. Gerontologist, 31(1), 67–75.
Weuve, J. L., Boult, C., & Morishita, L. (2000). The effects of outpatient geriatric evaluation and management on caregiver burden. Gerontologist, 40(4), 429–436.
Yordi, C., DuNah, R., Bostrom, A., Fox, P., Wilkinson, A., & Newcomer, R. (1997). Caregiver supports: Outcomes from the Medicare Alzheimer's disease demonstration. Health Care Financing Review, 19(2), 97–117.
Zank, S., & Schacke, C. (2002). Evaluation of geriatric day care units: Effects on patients and caregivers. The Journals of Gerontology, Series B: Psychological Sciences and Social Sciences, 57(4), P348–P357.
Zarit, S. H., Stephens, M. A., Townsend, A., & Greene, R. (1998). Stress reduction for family caregivers: Effects of adult day care use. The Journals of Gerontology, Series B: Psychological Sciences and Social Sciences, 53(5), S267–S277.
Ya-Mei Chen, Ph.D., M.P.H. Dr. Chen's research focus is the development of community-based long-term care services for elders and their families. With her expertise in program intervention and evaluation, Dr. Chen has been involved in several federal- and state-funded projects to help develop and evaluate programs specific to community elders.

Susan C. Hedrick, Ph.D. Dr. Hedrick's research focus is the cost-effectiveness of interventions to improve care for persons with chronic illnesses.

Heather M. Young, Ph.D. Dr. Young's research and clinical interests focus on environments that promote healthy aging. She has played an instrumental role in shaping long-term care policies in Washington State through her evaluation research.
Guide to Program Evaluation
Getting Started
What is Evaluation; Types of Evaluation Activities; Benefits of
Evaluation; Evaluation
Concerns; Evaluation Constraints
Planning the Evaluation
Are You Ready for Evaluation; Working With an Outside
Evaluator; Developing an Evaluation
Plan; Developing and Working With Program Logic Models
Assessing Program Performance
Identifying Goals and Objectives; Measuring Activities and
Outputs (Process Evaluation);
Measuring Outcomes (Impact Evaluation); Establishing the
"Activities-Outcomes" Connection
(Evaluation Experiments)
Data Collection
New or Existing Data; Using Existing Data; Using New Data;
Other Considerations
Reporting and Using Evaluation Results
Reviewing Evaluation Findings With Stakeholders; Writing a
Final Report; Using Evaluation
Results
Getting Started
What Is Evaluation?
Evaluation is a systematic, objective process for determining
the success of a policy or program.
It addresses questions about whether and to what extent the
program is achieving its goals and
objectives.
Learn More...
A Typology of Evaluation Levels (Office of Juvenile Justice
and Delinquency Prevention)
An Overview of Education Evaluation (Department of
Education)
Developing a Strategy for Evaluation (National Institute of
Justice)
Identifying Effective Criminal Justice Programs: Guidelines and
Criteria for the Nomination of
Effective Programs (Bureau of Justice Assistance)
Underlying Premise of Assessment and Evaluation (Bureau of
Justice Assistance)
Types of Evaluation Activities
Program Monitoring
Program monitoring involves the ongoing collection of
information to determine if programs are
operating according to plan. Monitoring provides ongoing
information on program
implementation and functioning.
Learn More...
Basic Monitoring and Comparative Monitoring (Office of
Juvenile Justice and Delinquency
Prevention)
Install a Monitoring System to Provide Continuous Feedback
(National Institute of Justice)
Selecting an Evaluation Design (National Institute of Justice)
Performance Measurement/Assessment
Performance measurement or assessment involves the ongoing
collection of information on whether
a program is meeting its goals and objectives. Performance
measures can address project
activities, services delivered, and the products of those services.
Learn More...
Introduction (Fairfax County Department of Management and
Budget, pp. 4-7)
Types of Program Performance Assessment (General Accounting Office)
Using Indicators Effectively (Vera Institute of Justice, pp. 2-15)
Process or Implementation Evaluation
Process evaluation focuses on program implementation and
operation. A process evaluation can
answer questions regarding program effort; identify processes
or procedures used to carry out the
functions of the program; and address program operation and
performance.
Learn More...
Documenting and Analyzing Program Installation and
Operations (Department of Education)
Implement a Process Evaluation to Document What is Done,
When, By Whom, To Whom
(National Institute of Justice)
Process Evaluation (Bureau of Justice Assistance)
Outcome or Impact Evaluation
This type of evaluation focuses on program success and
accomplishments. These evaluations
answer questions regarding program effectiveness; address
whether a program is achieving its
goals and objectives; and examine unintended consequences,
both positive and negative.
Learn More...
Basic Outcome Evaluation and Comparative Outcome
Evaluation (Office of Juvenile Justice and
Delinquency Prevention)
Impact Evaluation (Bureau of Justice Assistance)
Observing Behavioral Outcomes and Attributing Changes to the
Program (Department of
Education)
http://www.co.fairfax.va.us/gov/omb/Basic_Manual.pdf
http://www.gao.gov/special.pubs/gg98026.pdf
http://www.vera.org/publication_pdf/207_404.pdf
http://www.ed.gov/offices/OUS/PES/primer4.html
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/implement_a_process.htm
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/process_evaluation_gangs.htm
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/basic_outcome_evaluation.htm
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/basic_outcome_evaluation.htm#comparative
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/impact_eval_gangs.htm
http://www.ed.gov/offices/OUS/PES/primer5.html
Cost-Effectiveness and Cost-Benefit Assessment
These assessments use the results of a sound program evaluation to assess how effective a program is, relative to other program alternatives, for what it costs. Cost-benefit analysis does not answer the question of whether the program works; instead, it uses the results of evaluations to compare the economic value of the outcomes and costs of one program with another.
Learn More...
Comparative Costs and Benefits of Programs to Reduce Crime,
Version 4.0 (Washington State
Institute for Public Policy)
Distinguishing Cost-Benefit Analysis from Program Evaluation
(Justice Research and Statistics
Association, p. 6)
Benefits of Evaluation
Programs that participate in evaluations will obtain objective
information about their
performance and how it can be improved. Evaluation can
provide objective evidence that a
program is effective, demonstrating positive outcomes to
funding sources and the community. It
can help improve program effectiveness and can create
opportunities for programs to share
information with other similar programs and agencies.
Programs can use evaluation findings in a number of ways. For example, a program can use evidence of its success to make a case for continued funding and to attract new funding sources. A well-executed evaluation will point out areas in which the program can improve its operations. Sharing the results of evaluation also benefits others outside the program who seek to replicate justice interventions that work.
Learn More...
Benefits of Evaluation (Department of Housing and Urban
Development)
Introduction (National Institute of Justice)
http://www.wsipp.wa.gov/rptfiles/costbenefit.pdf
http://www.jrsa.org/pubs/juv-justice/briefing_cost-benefit.html
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/benefits_of_evaluation.htm
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/nijguide-intro.htm
Evaluation Concerns
Program managers and staff can sometimes be reluctant
participants in the evaluation process.
Below are some frequently-expressed concerns about program
evaluation and responses to those
concerns.
Concern: Evaluation draws resources away from program
services.
Response: Without evaluation, how do you know that the
services being provided are
effective? Program managers can explore options for obtaining
evaluation
services inexpensively.
Concern: Evaluation increases the burden on program staff.
Response: Evaluators can often implement changes to current
client data collection
procedures, resulting in little additional effort on the part of
program staff.
To reduce the burden and increase "buy-in," program staff
should be
involved in designing evaluation instruments and interpreting
evaluation
findings.
Concern: Evaluation is too complicated for program managers
and staff to
understand.
Response: An evaluation does not need to have the most
rigorous scientific method,
design, and analysis to be considered useful and valuable.
Evaluation
findings should be expressed in a manner that can be readily
understood
and used by program managers, staff, and other stakeholders.
Concern: Evaluation may produce negative results that will
harm the program.
Response: A good evaluation will point out both program
strengths and weaknesses.
No reputable evaluator will willingly participate in an
evaluation designed
to harm a program.
Learn More...
Common Concerns about Evaluation (Department of Housing
and Urban Development)
Guide to Frugal Evaluation for Criminal Justice (National
Institute of Justice, Chapter 6)
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/common_concerns_about_eval.htm
http://www.ncjrs.org/pdffiles1/nij/187350.pdf
Evaluation Constraints
Every evaluation is carried out under certain constraints or
limitations. These constraints should
be identified as part of the planning process for the evaluation.
Two major evaluation constraints
are time and cost. Evaluation results that are not timely are not
useful to program managers and
funding agencies. When evaluation information is needed
quickly, the evaluation must address
fewer questions. Similarly, the financial resources available for
the evaluation help to determine
its scope. The strengths and weaknesses of various evaluation
approaches should be considered
while keeping in mind the level of resources available.
Learn More...
Considering the Evaluation's Constraints (General Accounting
Office)
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/considering_the_evaluation.htm
Planning the Evaluation
Are You Ready for Evaluation?
Not all programs are ready to be evaluated; that is, they are not
able to provide information or
otherwise fully participate in the evaluation. To determine
whether a program is ready for
evaluation, evaluators have developed the process of
"evaluability assessment." An evaluability
assessment, undertaken prior to an evaluation, is designed to
address the question of whether the
program can participate fully in an evaluation. Some examples
of questions that can be addressed
in an evaluability assessment are listed below.
Is there a formal program design or model in place?
Programs must be able to document their goals and objectives,
and the strategies they
employ to achieve those goals and objectives.
Is the program design or model a sound one?
If program goals are unrealistic or strategies are not based in
theory or prior evidence, or
if program managers cannot explain how the activities and
services they provide are
expected to lead to the program’s desired outcomes, then
evaluation is not a good
investment.
Can the program participate in the evaluation?
Evaluations require data and information. If the program does
not collect data, and has no
capacity to generate data, then the evaluation will not be
successful.
Example of an Evaluability Assessment
The Youth Monitoring Program
Learn More...
Assessing Readiness for Evaluation (National Institute of
Justice)
Determining Whether to Evaluate at All (National Institute of
Justice)
Evaluability Assessment: Examining the Readiness of a
Program for Evaluation (Justice
Research and Statistics Association)
http://www.jrsa.org/pubs/juv-justice/evaluability-assessment-appendix.pdf
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/evaluation_strategies_p7_8.html
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/nijguide.html#determining
http://www.jrsa.org/pubs/juv-justice/evaluability-assessment.pdf
Selecting Critical Programs (Department of Housing and Urban
Development)
Time Frame for Evaluation (Office of Juvenile Justice and
Delinquency Prevention)
Working With an Outside Evaluator
One of the first issues that programs need to address when
considering an evaluation is whether
to use an evaluation expert, and whether that person can be in-
house (if such expertise exists) or
outside of the agency or program being evaluated. If funds are
available, a trained and
experienced evaluator can be of great assistance to a program
throughout the evaluation process.
If in-house expertise is available, the advantages and
disadvantages of using this person or an
external evaluator must be weighed.
Regardless of whether the evaluator is internal or external to the
agency being evaluated, finding
a qualified evaluator is essential. A qualified evaluator should
be experienced in evaluating
similar programs; should try to balance the needs and concerns
of a variety of decision-makers,
including the program managers, with issues related to the
objectivity of the evaluation; and
should be able to communicate with a wide variety of
individuals who have an interest in the
results of their work.
Learn More...
Building Evaluation into a Program RFP and Preparing an
Evaluation RFP (Office of Juvenile
Justice and Delinquency Prevention)
Choosing an Evaluator (Office of Juvenile Justice and
Delinquency Prevention)
Conducting Evaluations In-House or Under Contract (National
Institute of Justice)
Hiring and Working with an Evaluator (Justice Research and
Statistics Association)
Who Should Conduct Your Evaluation? (Department of Housing
and Urban Development)
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/documentb.html#critical
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/documentg.html#timeframe
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/building_evaluation_into_a_progr.htm
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/building_evaluation_into_a_progr.htm#preparing
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/choosing_an_evaluator.htm
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/chapter_4_nij_guide.htm
http://www.jrsa.org/njjec/publications/evaluator.pdf
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/chapter_3_housing.htm
Developing an Evaluation Plan
Once you have determined that you are ready for evaluation and
have decided who will conduct
the evaluation, the next step is to develop an evaluation plan.
An evaluation plan is a description
of the evaluation process. Some of the key elements that should
be addressed in the evaluation
plan include: who is the target audience for the evaluation; what
evaluation questions will be
asked; how the evaluation will be designed; what data will be
collected, how and by whom; and
what final products will be produced.
The evaluation plan should detail the roles that various
individuals will play in the evaluation
process; these individuals include the evaluator, the program
manager, staff, clients, and any
other stakeholders. Opportunities for preliminary review of
findings and conclusions should be
built into the plan.
Learn More...
Developing an Evaluation Plan (Department of Housing and
Urban Development).
Developing an Evaluation Plan (Justice Research and Statistics
Association, p. 7)
Steps in Planning Evaluations (U.S. Department of Education)
Developing and Working with Program Logic Models
While there are many forms, logic models specify relationships
among program goals,
objectives, activities, outputs, and outcomes. Logic models are
often developed using graphics or
schematics and allow the program manager or evaluator to
clearly indicate the theoretical
connections among program components: that is, how program
activities will lead to the
accomplishment of objectives, and how accomplishing
objectives will lead to the fulfillment of
goals. In addition, logic models used for evaluation include the
measures that will be used to
determine if activities were carried out as planned (output
measures) and if the program's
objectives have been met (outcome measures).
Why Use a Logic Model?
Logic models are a useful tool for program development and
evaluation planning for several
reasons:
• They serve as a format for clarifying what the program hopes to achieve;
• They are an effective way to monitor program activities;
• They can be used for either performance measurement or evaluation;
• They help programs stay on track as well as plan for the future; and
• They are an excellent way to document what a program intends to do and what it is actually doing.
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/developing_an_evalu.htm
http://www.jrsa.org/pubs/juv-justice/briefing_evaluator.html
http://www.ed.gov/offices/OUS/PES/primer3.html
Learn More About What a Logic Model Is and Why To Use It
Developing a Logic Model (The Urban Institute)
Developing and Using a Logic Model (The Urban Institute)
A Guide on Logic Model Development for CDC’s Prevention
Research Centers (Sundra,
Scherer, and Anderson)
Logic Model for Program Planning and Evaluation (University
of Idaho-Extension)
How to Develop a Logic Model
Developing a logic model requires program planners to think systematically about what they want their program to accomplish and how it will be done. The logic model should illustrate the linkages among the elements of the program, including the goal, objectives, resources, activities, process measures, outcomes, outcome measures, and external factors.
Logic Model Schematic
The following logic model format and discussion was developed
by the Juvenile Justice
Evaluation Center (JJEC) and maintained online by the Justice
Research and Statistics
Association (www.jrsa.org) from 1998 to 2007.
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/evaluation_strategies_p3_7.html
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/stop1-4.html#chap2
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/cdc-logic-model-development.pdf
http://www.uidaho.edu/extension/LogicModel.pdf
The following discussion explains the interconnectedness
among the elements of the logic
model.
At the top of the logic model example is a goal, a broad statement describing the desired long-term impact of the program. Knowing the long-term achievements a program is expected to make will help in determining what the overall program goal should be. Goals are not always achieved during the operation of a program; even so, evaluators and program planners should continually revisit the program's goals during program planning.
An objective is a more specific, measurable concept focused on
the immediate or direct outcomes
of the program that support accomplishment of your goal.
Unlike goals, objectives should be
achieved during the program. A clear objective will provide
information concerning the
direction, target, and timeframe for the program. Knowing the
difference your program will
make, who will be impacted, and when will be helpful when
developing focused objectives for
your program.
Resources or inputs can include staff, facilities, materials, funds, and so on: anything invested in the program to accomplish the work that must be done. The resources needed to conduct a program should be articulated during the early stages of program development to ensure that the program is realistically implemented and capable of meeting its stated goal(s).
Activities represent efforts conducted to achieve the program
objectives. After considering the
resources a program will need, the specific activities that will
be used to bring about the intended
changes or results must be determined.
Process Measures are data used to demonstrate the
implementation of activities. These include
products of activities and indicators of services provided.
Process measures provide
documentation of whether a program is being implemented as
originally intended. For example,
process measures for a mental health court program might
include the number of treatment
contacts or the type of treatment received.
Outcome measures represent the actual change(s) or lack thereof
in the target (e.g., clients or
system) of the program that are directly related to the goal(s)
and objectives. Outcomes may
include intended or unintended consequences. Three levels of
outcomes to consider include:
Initial outcomes: Immediate results of a program.
Intermediate outcomes: The results following initial outcomes.
Long-term outcomes: The ultimate impact of a program.
External Factors, located at the bottom of the logic model
example, are factors within the
system that may affect program operation. External factors vary
according to program setting
and may include influences such as development of or revisions
to state/federal laws, unexpected
changes in data-sharing procedures, or other similar programs running at the same time. It is
important to think about external factors that might change how
your program operates or affect
program outcomes. External factors should be included during
the development of the logic
model so that they can be taken into account when assessing
program operations or when
interpreting the absence or presence of program changes.
If-Then Logic Model
Another way to develop a logic model is by using an "if-then"
sequence that indicates how each
component relates to each other. Conceptually, the if-then logic
model works like this:
IF [program activity] THEN [program objective] and IF
[program objective] THEN [program
goal].
In practice, an if-then logic model might look like this:
IF a truancy reduction program is offered to youth who have
been truant from school THEN their
school attendance will increase and IF their school attendance is
increased THEN their
graduation rates will increase.
Another way to conceptualize the "if-then" format:
• If the required resources are invested, then those resources can
be used to conduct the
program activities.
• If the activities are completed, then the desired outputs for the
target population will be
produced.
• If the outputs are produced, then the outcomes will indicate
that the objectives of the
program have been accomplished.
Developing program logic using an "if-then" sequence can help
a program manager or evaluator
maintain focus and direction for the project and help specify
what will be measured through the
evaluation.
Common Problems When Developing Logic Models
• Links among elements (e.g., objectives, activities, outcome
measures) of the logic model
are unclear or missing.
It should be obvious which objective is tied to which activity, process measure, and so on. Logic models often contain separate lists of elements without specifying which item on one list relates to which item on another. This can easily cause confusion about the relationships among elements, or lead to an item being accidentally omitted.
• Too much (or too little) information is provided on the logic
model.
The logic model should include only the primary elements
related to program/project design and
operation. As a general rule, it should provide the "big picture"
of the program/project and avoid
providing very specific details related to how, for example,
interventions will occur, or a list of
all the agencies that will serve to improve collaboration efforts.
If you feel that a model with all
those details is necessary, consider developing two models: one with the fundamental elements and one with the details.
• Objectives are confused with activities.
Make sure that items listed as objectives are in fact objectives
rather than activities. Anything
related to program implementation or a task that is being carried
out in order to accomplish
something is an activity rather than an objective. For example,
'hire 10 staff members' is an
activity that is being carried out in order to accomplish an
objective such as 'improve response
time for incoming phone calls.'
• Objectives are not measurable.
Unlike goals, which are not considered measurable because they are broad, mission-like statements, objectives should be measurable and directly related to the accomplishment of the
goal. An objective is measurable when it specifically identifies
the target (who or what will be
affected), is time-oriented (when it will be accomplished), and
indicates direction of desired
change. In many cases, measurable objectives also include the
amount of change desired.
Other Logic Model Examples
Phoenix Gang Logic Model
OJJDP Generic Logic Model
United Way Program Outcome Model
University of Missouri Extension Program Planning and
Development Logic Model
Learn More About How to Develop a Logic Model
Developing a Basic Logic Model for Your Program (The
University of Arizona School of Public
Health)
Enhancing Performance with Logic Models (University of
Wisconsin-Extension, Division of
Cooperative Extension)
Establishing Goals, Objectives and Evaluation Criteria (U.S.
Department of Housing and Urban
Development)
Using the Logic Model for Program Planning (Legal Service
Corporation Resource Information)
Assessing Program Performance
Identifying Goals and Objectives
Programs must have clearly specified goals and objectives
before an evaluation can take place. A
program goal is a broad statement of what the program hopes to
accomplish or what changes it
expects to produce. Examples of program goal statements
include:
• Reduce reoffending among substance abusing offenders served
by the program
• Reduce the crime rate in the neighborhood targeted by the
program
• Restore a sense of well-being to victims of crime
An objective is a specific and measurable condition that must be
attained in order to accomplish
a particular program goal. There are many different ways to specify objectives; the program and evaluator should choose the method that works best for each situation. Examples of program objectives include:
• Assist substance abusing offenders in abstaining from drug use
• Ensure that victims of crime feel compensated for their losses
• Improve reading scores by one grade level for 80% of the juveniles who complete the program
http://www.newfreedomprograms.com/download/gp_logic_model.pdf
http://www.ojjdp.ncjrs.gov/grantees/pm/generic_logic_model.pdf
http://national.unitedway.org/outcomes/resources/mpo/model.cfm
http://outreach.missouri.edu/staff/programdev/plm/
http://www.publichealth.arizona.edu/chwtoolkit/PDFs/Logicmod/chapter2.pdf
http://www1.uwex.edu/ces/lmcourse/
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/task_4.htm
http://www.lri.lsc.gov/pdf/other/TIG_Conf._Materials/EMcKay_Logic_Model_Intro_LSC.pdf
Learn More...
Establishing Evaluation Criteria (U.S. Department of Housing
and Urban Development)
The Logic of Evaluation (Office of Juvenile Justice and
Delinquency Prevention)
Measuring Performance When There is No Bottom Line (Bureau
of Justice Assistance)
The Problem of Defining Agency Success (Bureau of Justice
Assistance)
State your Program Objectives in Measurable Terms (U.S.
Department of Housing and Urban
Development)
What You Expect: Building A Theory of Action (National
Institute of Justice, Chapter 2)
Measuring Activities and Outputs: Process Evaluation
Once a program has identified its goals and objectives, it needs
to specify the major activities or
processes that it will undertake that will lead to accomplishing
these goals and objectives. One
component of measuring a program's performance is to determine whether activities were actually implemented as planned. This matters because, if activities are not implemented as planned, there is no reason to believe that they will produce the desired outcomes.
The immediate results of activities are referred to as outputs.
Output measures are indicators of
the degree to which activities were implemented as planned.
Examples of output measures
include:
• Number of offenders receiving counseling services
• Number of community service projects completed
• Proportion of parolees who receive drug tests
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/step_4.htm
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/documentg.html#developing-an-effective
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/measuring_performance_when_there.htm
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/problem_of_defining_agency_succe.htm
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/step_3.htm
http://www.ncjrs.org/pdffiles1/nij/187350.pdf
Process evaluation focuses on program implementation. Process
evaluations generally involve:
reviewing program documents, interviewing program staff,
observing program operations, and
collecting data from program files. In addition to collecting data
on output measures, process
evaluations examine a number of additional questions; for
example:
• How well were key program elements, such as multiagency
collaboration, implemented?
• Did the program serve its target group (for example, high risk
probationers)?
• What was the dropout rate for the program, and how can this
rate be reduced?
Learn More...
Implement a Process Evaluation to Document What is Done,
When, by Whom,
To Whom (National Institute of Justice)
Measurement Issues (Office of Juvenile Justice and
Delinquency Prevention)
Process Analysis (The Urban Institute)
Program Implementation (General Accounting Office)
Measuring Outcomes: Impact Evaluation
Another component of measuring a program's performance is
determining whether the activities
produced the desired effects or outcomes or, put another way,
whether the program achieved its
objectives. Measuring outcomes tells the program and the
evaluator what impacts the program
has had or what results it has achieved. Such impacts are
usually expressed in terms of behavior
change in those served by the program: reducing reoffending or
increasing knowledge about the
negative consequences of substance abuse. Outcomes may be
divided into short-term,
intermediate, and long-term outcomes, with the last usually
being the program goal.
There are a number of different ways to define and measure any
particular outcome. The choice
of a measurement method is critical to the program assessment
process. A professional evaluator
can be useful in helping to develop and identify valid and
reliable outcome measures.
Learn More...
Basic Outcome Evaluation and Comparative Outcome
Evaluation (Office of Juvenile Justice and
Delinquency Prevention)
Measuring Program Outcomes (Office of Juvenile Justice and
Delinquency Prevention)
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/implement_a_process.htm
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/measurement_issues.htm
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/stop5-9.html#process_analysis
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/documentee.html#we-frequently-are
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/basic_outcome_evaluation.htm
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/documentg.html#measuring-outcomes
Varieties of Outcome Measures (Office of Juvenile Justice and
Delinquency Prevention)
Establishing the "Activities-Outcomes"
Connection: Evaluation Experiments
Performance measurement can and should assess program
outcomes. However, in order to
establish the connection between a program's activities and
observed outcomes, an impact
evaluation, in the form of an experiment or randomized
controlled trial (RCT), is necessary. The
RCT involves assigning individuals randomly to participate in
the program, then comparing
outcomes for program participants and non-participants. While
in theory all programs should be
evaluated using RCTs, practical considerations limit their use in
many situations. In order to
illustrate the advantages and disadvantages of evaluation
experiments, three common evaluation
designs are reviewed:
• Pre-experimental (pre-post) design
• Quasi-experimental (comparison group) design
• Experimental (control group) design (randomized controlled
trial)
Learn More...
Allocate Sufficient Funds for an Impact Evaluation: If
Controlled Experimentation is Infeasible,
Approach Less Rigorous Designs with Caution and Imagination
(National Institute of Justice)
Impact Evaluation Designs and The Impact Evaluation Design
'Decision Tree' (The Urban
Institute)
Methods of Analyzing Data (National Institute of Justice)
Observing Behavioral Outcomes and Attributing Changes to the
Program (U.S. Department of
Education)
Establishing the "Activities-Outcomes" Connection: Evaluation
Experiments
Quasi-Experimental (Comparison Group) Design
In this design, change is assessed by comparing perceptions or
behaviors of program participants
with those of non-participants (comparison group). If outcomes
for the two groups differ in the
expected way (e.g., program participants have lower recidivism
rates than non-participants), then
the evaluator assumes that the difference was caused by the
program.
The assumption here is that the program participants are exactly
like the non-participants in
every way except that they received the program services, so
any differences between the two
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/varieties_of_outcome_measures.htm
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/allocate_sufficient_funds.htm
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/stop5-9.html#impact_evaluation_designs
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/stop5-9.html#decisiontree
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/chapter_3_nij_guide.htm
http://www.ed.gov/offices/OUS/PES/primer5.html
Establishing the "Activities-Outcomes"
Connection: Evaluation Experiments
Pre-Experimental (Pre-Post) Design
The pre-post design measures program outcomes by comparing perceptions or behaviors at the end of the program (post) to some baseline, usually the same elements measured prior to the start of the program (pre). If program participants change in the expected direction, then the outcomes are said to have been achieved.
The difficulty with this design is that it is not possible to
attribute any observed changes to the
program itself, as opposed to other factors that might have
produced the changes. In other words,
it is impossible to conclude that the program activities caused
the observed outcomes.
Learn More...
The Before-and-After Design (General Accounting Office)
Pre- and Post-Test Scores (National Institute of Justice, p. 4.8)
Threats to Validity (Office of Juvenile Justice and Delinquency
Prevention)
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/documentbb.html#before-and-after
http://www.ncjrs.org/pdffiles1/nij/187350.pdf
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/documentg.html#evaluation-design
Establishing the "Activities-Outcomes"
Connection: Evaluation Experiments
Quasi-Experimental (Comparison Group)
Design
In this design, change is assessed by comparing perceptions or
behaviors of program participants
with those of non-participants (comparison group). If outcomes
for the two groups differ in the
expected way (e.g., program participants have lower recidivism
rates than non-participants), then
the evaluator assumes that the difference was caused by the
program.
The assumption here is that the program participants are exactly
like the non-participants in
every way except that they received the program services, so
any differences between the two
must be due to the program. In such designs, evaluators often
select non-participants who match
participants on key factors, such as age, gender, and criminal
history.
The trouble with this design, however, is that the evaluator can
never be certain that the groups
are exactly the same on every factor that might lead to
differences in observed outcomes. The
evaluator can have more confidence in the results of a quasi-
experiment than he or she can in the
results of the pre-post design, but still cannot be certain that the
program activities caused the
observed outcomes.
Learn More...
The Nonequivalent Comparison Group Design (General Accounting Office)
Non-Random Comparison Group (National Institute of Justice,
pp. 4.5-4.6)
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/documentbb.html#nonequivalent-comparison
http://www.ncjrs.org/pdffiles1/nij/187350.pdf
Establishing the "Activities-Outcomes"
Connection: Evaluation Experiments
Experimental (Control Group) Design
(Randomized Controlled Trial)
As in the quasi-experiment, a randomized controlled trial (RCT)
involves comparing program
participants and non-participants. In order to ensure
equivalence, the RCT involves randomly
assigning participants to groups. This means that which
offenders receive program services and
which do not is decided not by a judge or other criminal justice
administrator, but by the
evaluator. This random assignment procedure is the best way of
ensuring that there are no
differences between program participants and non-participants
except for the program services
provided to the former group.
This design, however, cannot always be employed to assess
criminal justice initiatives. For some
initiatives, like community-wide efforts and multijurisdictional
law enforcement drug task
forces, assigning cases randomly is not feasible. In other cases,
judges and other criminal justice
administrators may refuse to surrender their discretion in the
interests of sound evaluation
practice.
Learn More...
Random Assignment (National Institute of Justice, pp. 4.3-4.4)
The True Experiment (General Accounting Office)
Use of Random Assignment (Office of Juvenile Justice and
Delinquency Prevention)
Data Collection
New or Existing Data?
Most programs collect some information that is potentially
useful for evaluation. At the outset,
the evaluation needs to assess what data already exist, what the
quality of the data are, and
whether they are readily available in a useable form. The
answers to these questions will help to
determine whether existing data can be used, or whether new
data must be collected.
http://www.ncjrs.org/pdffiles1/nij/187350.pdf
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/documentbb.html#true-experiment
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/use_of_random_assignment.htm
When planning an evaluation, the evaluator must determine
whether existing or new data will be
used in data analysis. The advantage of using new data is the
greater control an evaluator has
over the measures, procedures, and data collection staff, which
can contribute to greater
reliability and validity of the data. Using existing data has the
advantage of cost savings, because
time, effort, and other resources are not spent on collecting new
data.
Learn More...
Data Collection (U.S. Department of Housing and Urban
Development)
How Do You Get the Information You Need for Your
Evaluation? (U.S. Department of Housing
and Urban Development)
Obtaining Information for Evaluations - Use Existing Data or
Collect New Information?
(National Institute of Justice)
Using Existing Data
Sometimes evaluators are able to use information that already
exists without going through the
expensive and time-consuming process of collecting new data.
Information collected by the
program for a variety of purposes may have value for
performance measurement and evaluation.
Evaluators can often make relatively small changes in the
program's practices and procedures
that will result in data that can be more readily used for
evaluation. Examples of existing data on
program participants that might be able to be used for
evaluation include:
• Attendance records
• Counseling forms and progress notes
• Discharge summaries
• Presentence investigation reports
• Psychological testing and other classification information
Learn More...
Ensuring That Evaluations Yield Valid and Reliable Findings
(U.S. Department of Education)
Verifying the Accuracy of the Data (U.S. Department of
Housing and Urban Development)
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/task_6.htm
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/chapter_6_how_do_you_get.htm
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/chapter_2_nij_guide.htm
http://www.ed.gov/offices/OUS/PES/primer6.html
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/verifying.htm
Using New Data
Even if some evaluation data are currently collected, they will
often need to be supplemented by
the collection of additional data. These new data can be
collected through various strategies:
Direct Observation
Obtaining data by on-site observation has the advantage of
providing an opportunity to
learn in detail how the project works, the context in which it
exists, and what its various
consequences are. However, this type of data collection can be
expensive and time-
consuming. Observations conducted by program staff, as
opposed to an outside evaluator,
may also suffer from subjectivity.
Interviews
Interviews are an effective way of obtaining information about
the perceptions of
program staff and clients. An external evaluator will often
conduct interviews with
program managers, staff members, and clients to obtain their
perceptions of how well the
program functions. A disadvantage to conducting interviews is
that they can be time-
consuming and costly, and produce subjective information.
Surveys and Questionnaires
Surveys and questionnaires can provide information on program
staff members'
perceptions of program operations and their own functions.
Surveys of clients can
provide information on attitudes, beliefs, and self-reported
behaviors. An important
benefit of surveys is that they provide anonymity to
respondents, which can reduce the
likelihood of biased reporting and increase data validity. A
variety of issues are
associated with the use of surveys and questionnaires, including
reading level, cultural
bias, and sensitivity to particular wording.
Official Records
Official records and files are one of the most common sources
of data for criminal justice
evaluations. Arrest reports, court files, and prison records all
contain much useful
information for assessing program outcomes. Often these files
are automated, making
accessing these data easier and less expensive.
Learn More...
Basic Guidelines for the Development of Survey Items (Office
of Juvenile Justice and
Delinquency Prevention)
Data Collection Strategies (The Urban Institute)
Developing and Using Questionnaires (General Accounting
Office)
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/basic_guidelines_for_the_develop.htm
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/stop5-9.html#data_collection_strategies
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/docu
1. Plato’s Republic takes as its point of departure the question of .docx
 
1. Objective Learn why and how to develop a plan that encompasses a.docx
1. Objective Learn why and how to develop a plan that encompasses a.docx1. Objective Learn why and how to develop a plan that encompasses a.docx
1. Objective Learn why and how to develop a plan that encompasses a.docx
 
1. Open the attached Excel Assignment.xlsx” file and name it LastN.docx
1. Open the attached Excel Assignment.xlsx” file and name it LastN.docx1. Open the attached Excel Assignment.xlsx” file and name it LastN.docx
1. Open the attached Excel Assignment.xlsx” file and name it LastN.docx
 
1. must be a research article from either pubmed or google scholar..docx
1. must be a research article from either pubmed or google scholar..docx1. must be a research article from either pubmed or google scholar..docx
1. must be a research article from either pubmed or google scholar..docx
 

Recently uploaded

Framing an Appropriate Research Question 6b9b26d93da94caf993c038d9efcdedb.pdf
Framing an Appropriate Research Question 6b9b26d93da94caf993c038d9efcdedb.pdfFraming an Appropriate Research Question 6b9b26d93da94caf993c038d9efcdedb.pdf
Framing an Appropriate Research Question 6b9b26d93da94caf993c038d9efcdedb.pdfUjwalaBharambe
 
MARGINALIZATION (Different learners in Marginalized Group
MARGINALIZATION (Different learners in Marginalized GroupMARGINALIZATION (Different learners in Marginalized Group
MARGINALIZATION (Different learners in Marginalized GroupJonathanParaisoCruz
 
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxPOINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxSayali Powar
 
Employee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptxEmployee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptxNirmalaLoungPoorunde1
 
Full Stack Web Development Course for Beginners
Full Stack Web Development Course  for BeginnersFull Stack Web Development Course  for Beginners
Full Stack Web Development Course for BeginnersSabitha Banu
 
How to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptxHow to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptxmanuelaromero2013
 
Final demo Grade 9 for demo Plan dessert.pptx
Final demo Grade 9 for demo Plan dessert.pptxFinal demo Grade 9 for demo Plan dessert.pptx
Final demo Grade 9 for demo Plan dessert.pptxAvyJaneVismanos
 
Biting mechanism of poisonous snakes.pdf
Biting mechanism of poisonous snakes.pdfBiting mechanism of poisonous snakes.pdf
Biting mechanism of poisonous snakes.pdfadityarao40181
 
Historical philosophical, theoretical, and legal foundations of special and i...
Historical philosophical, theoretical, and legal foundations of special and i...Historical philosophical, theoretical, and legal foundations of special and i...
Historical philosophical, theoretical, and legal foundations of special and i...jaredbarbolino94
 
DATA STRUCTURE AND ALGORITHM for beginners
DATA STRUCTURE AND ALGORITHM for beginnersDATA STRUCTURE AND ALGORITHM for beginners
DATA STRUCTURE AND ALGORITHM for beginnersSabitha Banu
 
CELL CYCLE Division Science 8 quarter IV.pptx
CELL CYCLE Division Science 8 quarter IV.pptxCELL CYCLE Division Science 8 quarter IV.pptx
CELL CYCLE Division Science 8 quarter IV.pptxJiesonDelaCerna
 
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdfssuser54595a
 
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...M56BOOKSTORE PRODUCT/SERVICE
 
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17Celine George
 
Interactive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationInteractive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationnomboosow
 
Crayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon ACrayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon AUnboundStockton
 
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptxVS Mahajan Coaching Centre
 

Recently uploaded (20)

Framing an Appropriate Research Question 6b9b26d93da94caf993c038d9efcdedb.pdf
Framing an Appropriate Research Question 6b9b26d93da94caf993c038d9efcdedb.pdfFraming an Appropriate Research Question 6b9b26d93da94caf993c038d9efcdedb.pdf
Framing an Appropriate Research Question 6b9b26d93da94caf993c038d9efcdedb.pdf
 
MARGINALIZATION (Different learners in Marginalized Group
MARGINALIZATION (Different learners in Marginalized GroupMARGINALIZATION (Different learners in Marginalized Group
MARGINALIZATION (Different learners in Marginalized Group
 
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxPOINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
 
ESSENTIAL of (CS/IT/IS) class 06 (database)
ESSENTIAL of (CS/IT/IS) class 06 (database)ESSENTIAL of (CS/IT/IS) class 06 (database)
ESSENTIAL of (CS/IT/IS) class 06 (database)
 
Employee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptxEmployee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptx
 
Full Stack Web Development Course for Beginners
Full Stack Web Development Course  for BeginnersFull Stack Web Development Course  for Beginners
Full Stack Web Development Course for Beginners
 
How to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptxHow to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptx
 
Final demo Grade 9 for demo Plan dessert.pptx
Final demo Grade 9 for demo Plan dessert.pptxFinal demo Grade 9 for demo Plan dessert.pptx
Final demo Grade 9 for demo Plan dessert.pptx
 
Biting mechanism of poisonous snakes.pdf
Biting mechanism of poisonous snakes.pdfBiting mechanism of poisonous snakes.pdf
Biting mechanism of poisonous snakes.pdf
 
Historical philosophical, theoretical, and legal foundations of special and i...
Historical philosophical, theoretical, and legal foundations of special and i...Historical philosophical, theoretical, and legal foundations of special and i...
Historical philosophical, theoretical, and legal foundations of special and i...
 
Model Call Girl in Bikash Puri Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Bikash Puri  Delhi reach out to us at 🔝9953056974🔝Model Call Girl in Bikash Puri  Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Bikash Puri Delhi reach out to us at 🔝9953056974🔝
 
DATA STRUCTURE AND ALGORITHM for beginners
DATA STRUCTURE AND ALGORITHM for beginnersDATA STRUCTURE AND ALGORITHM for beginners
DATA STRUCTURE AND ALGORITHM for beginners
 
CELL CYCLE Division Science 8 quarter IV.pptx
CELL CYCLE Division Science 8 quarter IV.pptxCELL CYCLE Division Science 8 quarter IV.pptx
CELL CYCLE Division Science 8 quarter IV.pptx
 
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
 
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...
 
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
 
Interactive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationInteractive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communication
 
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
 
Crayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon ACrayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon A
 
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
 

A pilot evaluation of the Family Caregiver Support Program.docx

effects of these services have shown mixed results. The research has also shown the need for a uniform method of evaluating caregiver support services. Feinberg and Newman (2006) studied administrators' experiences of implementing the National Family Caregiver Support Program in all 50 U.S. states and found substantial unevenness in service programs across states. They therefore suggested that a uniform assessment and evaluation tool is needed to better provide services to family caregivers.
1.1. Background of the problem

Caregiver support services most commonly provide information access, caregiver education and training, and respite and supplemental services. Research findings regarding the effects of these services have been mixed. Some studies, including those with rigorous designs such as randomized controlled trials, showed caregiver support services either to have little or no impact on caregivers' outcomes, or to be effective only for a subgroup of the caregiver population. Other studies, however, showed these services to be effective from several perspectives in supporting family caregivers (Brodaty, Green, & Koschera, 2003; Burns, Nichols, Martindale-Adams, Graney, & Lummus, 2003; Gallagher-Thompson et al., 2003; Lee & Cameron, 2004; Maas et al., 2004; Newcomer, Yordi, DuNah, Fox, & Wilkinson, 1999; Roberts et al., 1999; Toseland, Blanchard, & McCallion, 1995; Zank & Schacke, 2002). To prepare for the current study, we completed a systematic review of 34 previous studies of caregiver interventions. The studies reviewed were conducted between 1984 and 2004 and used either randomized controlled trials or quasi-experimental controlled designs. Each study investigated one of four categories of services, as shown in Table 1.

Table 1
Summary of 34 studies reviewed.

  Intervention studied                                      N   Sig. positive effects   No effects
  Adult day care/respite services                          10   6                       4
  Caregiver training/counseling/support group              13   6                       7
  Supplemental services (i.e., meal delivery service,
    transportation, homemaker, or home aide care)           3   2                       1
  Coordinated program (i.e., all-inclusive care for the
    elderly, containing more than one of the three
    categories above)                                       8   3                       5

References reviewed: Berry et al., 1991; Brodaty et al., 2003; Burns et al., 2003; Chang, 1999; Coon et al., 2003; Cox, 1997; Edelman & Hughes, 1990; Fox et al., 2000; Gallagher-Thompson et al., 2003; Gaugler et al., 2003a,b; Gitlin et al., 2003; Gottlieb & Johnson, 1993; Hepburn et al., 2001; Kemper, 1988; Kosloski & Montgomery, 1993, 1994; Krout, 1995; Lawton et al., 1989; Maas et al., 2004; Miller et al., 1999; Montgomery & Borgatta, 1989; Montgomery et al., 1985; Newcomer et al., 1999; Quayhagen et al., 2000; Roberts et al., 1999; Toseland et al., 1995, 2001, 2004; Tourigny et al., 2004; Yordi et al., 1997; Zank & Schacke, 2002; Zarit et al., 1998.

Our review of these studies found that only a little more than half showed services to result in any benefit for family caregivers. The review also indicated that different services might be associated with different caregiver outcomes. For example, studies that found a positive effect of caregiver education and training showed these services to increase caregiver effectiveness in solving problems, improve caregiver feelings of competence, and reduce caregivers' subjective and objective burdens (Brodaty et al., 2003; Burns et al., 2001, 2003; Chang, 1999; Coon, Thompson, Steffen, Sorocco, & Gallagher-Thompson, 2003; Gallagher-Thompson et al., 2003; Gitlin et al., 2003; Hepburn, Tornatore, Center, & Ostwald, 2001; Montoro-Rodriguez, Kosloski, & Montgomery, 2003; Quayhagen et al., 2000; Toseland, McCallion, Smith, & Banks, 2004; Toseland et al., 2001; Weuve, Boult, & Morishita, 2000). With regard to respite and supplemental services, on the other hand, studies that found a positive effect showed these services to decrease caregiver stress; decrease feelings of role overload, depression, burden, and time commitment; and improve overall psychological well-being (Berry, Zarit, & Rabatin, 1991; Cox, 1997; Gaugler et al., 2003a,b; Gottlieb & Johnson, 1993; Krout, 1995; Montgomery & Borgatta, 1989; Okamoto, Murashima, & Saito, 1998; Zarit, Stephens, Townsend, & Greene, 1998).

These results demonstrate the difficulty of evaluating caregiver support services. Most evaluation tools used in previous studies assessed one particular aspect of the services' outcomes, such as caregiver burden, more than other outcomes, such as caregiver mastery. Such evaluation methods may produce nonsignificant findings when the chosen tool does not focus on the appropriate caregiver outcomes. Therefore, developing a uniform evaluation method broad enough to cover multiple facets of caregiver outcomes is a challenging but important task. In addition, understanding whether different services relate to different caregiver outcomes, and which services might best support particular caregiver outcomes, could be very helpful for choosing or developing evaluation methods and tools.

1.2. Purpose of study

The purposes of this study were twofold: the first was to test a simple evaluation method that would be easy for service agencies to adopt and that could be adopted on a wide scale; the second was to determine whether different types of caregiver support services are associated with different caregiver outcomes. We collaborated with Aging and Disability Services (ADS) in Seattle, the local AAA, through a pilot study that evaluated a federal- and state-funded project, the Family Caregiver Support Program (FCSP), in King County, Washington State. In this region, the FCSP provides various caregiving support services, including adult day care, in-home respite, information services, and financial assistance to the caregiver (ADS, 2003).

2. Methods

2.1. Design, setting, and participants

This study was a descriptive, one-time survey of caregivers living in King County who were reported as having received services from ADS's local service agencies. Four local agencies agreed to send an invitation letter and questionnaire to all caregivers who had received FCSP-funded services between 2001 and 2003.
The University of Washington Human Subjects Division approved this study.

2.2. Questionnaire development

The researchers assisted the FCSP team in selecting tools appropriate for evaluating the FCSP. Several tools were reviewed, including the "Caregiver Appraisal Scale" (Lawton & Brody, 1969), the "Subjective and Objective Burden Scale" (Montgomery, Gonyea, & Hooyman, 1985), and the "Mastery Scale" (Pearlin & Schooler, 1978). To ensure the usefulness of the evaluation tool, and with the intention of selecting a tool on the basis of both scientific evidence and hands-on experience, the team invited the four participating local caregiver service agencies to contribute their expertise. After thorough discussion, the team selected the "Caregiver Appraisal Scale" (CAS) developed by Lawton and Brody for the appropriateness of its language and its coverage of the broad scope of relevant caregiver experiences (Vitaliano, Young, & Russo, 1991). The other tools reviewed target only a single facet of caregiving experiences, and single-perspective tools were less adequate for the purposes of the current study: the FCSP is a program with multiple components comprising various types of services, and caregivers' experiences are likely multifaceted as well.

The agencies consulted further suggested reducing the length of the CAS questionnaire in order not to overly stress caregivers. Consequently, three subscales were chosen for use in the study: "Subjective Burden" (e.g., "Your health has suffered because of the care you must give to care receiver" or "Very tired as a result of caring for care receiver"), "Caregiving Mastery" (e.g., "I can fit in most of the things I need to do in spite of the time taken by caring for care receiver"), and "Caregiving Satisfaction" (e.g., "Helping care receiver has made you feel closer to him/her" or "Care receiver shows real appreciation of what you do for her/him"). The length of the revised Caregiver Appraisal Scale (hereafter, ADS-CAS) was thereby reduced from 47 to 34 items, with 13, 12, and 9 items on the "Subjective Burden," "Caregiving Mastery," and "Caregiving Satisfaction" subscales, respectively. The two CAS subscales not used in this study are "Impact of Caregiving" and "Cognitive Reappraisal." The former was excluded because of its high correlation with the "Subjective Burden" subscale (Deeken, Taylor, Mangan, Yabroff, & Ingham, 2003; Lawton, Kleban, Moss, Rovine, & Glicksman, 1989); the latter was excluded as not reflecting the purposes of the FCSP.

Participants responded to each ADS-CAS item on a 5-point scale, from "Rarely or never (1)" to "Most of the time (5)." Higher total and subscale scores represent more positive caregiving appraisals, except on the Subjective Burden subscale, where higher scores indicate higher perceived subjective burden.

In addition to the three subscales, the survey gathered information on caregivers' age, gender, and relation to care receivers; the type of care provided; the hours of care (including hands-on and supervisory care) provided in the week prior to the survey; and the types of services that the caregivers received. Agencies reported providing the following services: (1) information about services, (2) assistance in accessing services, (3) caregiver counseling, (4) caregiver education and training, (5) financial assistance, (6) respite services/adult day care, (7) help with housework, (8) delivered meals, (9) transportation, and (10) cash support. Caregivers were asked whether they had received each of these services. As a result of discussions with the FCSP team and with local service agencies, we further grouped these 10 services into three categories based on the nature of the services: (1) counseling and education services (information about services, assistance in accessing services, caregiver counseling, and caregiver education and training), (2) respite and supplemental services (respite services/adult day care, help with housework, delivered meals, and transportation), and (3) financial support services (financial assistance and cash support for caregiving).
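Agencies wishing to reuse this method must score the ADS-CAS and assign services to the three categories themselves. The sketch below illustrates one way to do both; the study's own analysis was run in SPSS, so this Python is only illustrative, and the item names, the choice of reverse-keyed items, and the service column names are hypothetical placeholders rather than the actual instrument layout.

```python
import pandas as pd

# 5-point items: 1 = "Rarely or never" ... 5 = "Most of the time".
# Item names and reverse-keyed items below are hypothetical placeholders.
SUBSCALES = {
    "subjective_burden": [f"sb_{i}" for i in range(1, 14)],        # 13 items
    "caregiving_mastery": [f"cm_{i}" for i in range(1, 13)],       # 12 items
    "caregiving_satisfaction": [f"cs_{i}" for i in range(1, 10)],  # 9 items
}
REVERSE_KEYED = {"cm_3", "cm_7"}  # assumed; the actual keying is not published here

# The 10 services grouped into the study's three categories.
SERVICE_CATEGORIES = {
    "counseling_education": ["services_information", "access_assistance",
                             "caregiver_counseling", "training_education"],
    "respite_supplemental": ["respite_adult_day_care", "housework_help",
                             "delivered_meals", "transportation"],
    "financial_support": ["financial_assistance", "cash_support"],
}


def score_adscas(df: pd.DataFrame) -> pd.DataFrame:
    """Item-mean subscale scores after reverse coding (x -> 6 - x on a 1-5 scale)."""
    scored = df.copy()
    for item in REVERSE_KEYED:
        scored[item] = 6 - scored[item]
    out = pd.DataFrame(index=df.index)
    for name, items in SUBSCALES.items():
        out[name] = scored[items].mean(axis=1)
    all_items = [i for items in SUBSCALES.values() for i in items]
    out["cas_total"] = scored[all_items].mean(axis=1)  # overall item mean, 34 items
    return out


def category_use(df: pd.DataFrame) -> pd.DataFrame:
    """Flag use of a service category: any service in the category reported."""
    return pd.DataFrame({cat: df[svcs].astype(bool).any(axis=1)
                         for cat, svcs in SERVICE_CATEGORIES.items()})
```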
A general service satisfaction question was also included in the ADS-CAS, with a 4-point response scale: "Poor (1)," "Fair (2)," "Good (3)," or "Excellent (4)."

2.3. Data collection methods

Each agency sent each of its clients a cover letter, a questionnaire, and a postage-paid return envelope addressed to ADS. To protect clients' confidentiality, the questionnaires were anonymous, and no follow-up occurred. A total of 866 survey packets were sent out, and 177 questionnaires (20.4%) were returned.

2.4. Quantitative data analysis

Data analyses were conducted using the Statistical Package for the Social Sciences (SPSS-PC) version 12.0. Prior to analysis, all items were examined to assess the accuracy of variable calculations and missing values. If variables were missing at a rate greater than 5%, multiple imputation was applied (Rubin, 1977; Schafer, 1997, 1999, 2000). Cronbach's alpha was used to evaluate the internal consistency of each subscale of the ADS-CAS.
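The two preprocessing checks just described, flagging variables missing at a rate above 5% and computing Cronbach's alpha, are straightforward to reproduce. A minimal sketch follows; the column names are assumed, and the commented lines show one common open-source stand-in for the multiple imputation that the study performed in SPSS.

```python
import pandas as pd


def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: rows are respondents, columns are items of one subscale."""
    complete = items.dropna()
    k = complete.shape[1]
    item_var_sum = complete.var(axis=0, ddof=1).sum()
    total_var = complete.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)


def needs_imputation(df: pd.DataFrame, threshold: float = 0.05) -> list:
    """Variables missing at a rate above the study's 5% threshold."""
    rates = df.isna().mean()
    return rates[rates > threshold].index.tolist()


# One open-source stand-in for the multiple imputation done in SPSS:
# from sklearn.experimental import enable_iterative_imputer  # noqa: F401
# from sklearn.impute import IterativeImputer
# draws = [IterativeImputer(sample_posterior=True, random_state=s).fit_transform(df)
#          for s in range(5)]  # five imputed datasets
```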
Descriptive analyses were used to depict the characteristics of caregivers and the services they received. The evaluation method comprised two steps. First, we examined the gaps between the types of care that the caregivers provided and the types of services that the caregivers received. Second, MANCOVAs were used to compare caregivers' appraisals across three comparisons: (1) those who reported using any of the 10 services versus those who did not; (2) caregivers who used one particular service versus those who did not report using that particular service (for example, in comparing caregivers who had received financial services with caregivers who reported they had not, "users" may have received other services as well as financial services, and "nonusers" of financial services may have received other services or may have reported not receiving any services); and (3) those who used only one of the three categories of services versus those who did not use that particular service category (for example, caregivers who had received services from the financial services category only, and no services from other categories, were compared with caregivers who reported they had not received any services from the financial services category; "nonusers" in a category may have received services in other categories, or may have reported not receiving any services at all). Clients' age, gender, and the number of care activities they provided were controlled as covariates. The outcome measures were (1) the item mean of the ADS-CAS; (2-4) the item means of each of the three ADS-CAS subscales; (5) the total hours the caregiver spent on caregiving during the previous week; and (6) the caregiver's satisfaction with services received.
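For readers who want to replicate this comparison outside SPSS, the sketch below runs one users-versus-nonusers MANCOVA with statsmodels, entering the six outcome measures on the left-hand side and the use flag plus the three covariates on the right. The variable names are assumptions, not the study's actual dataset.

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA


def mancova_users_vs_nonusers(df: pd.DataFrame, use_flag: str):
    """MANCOVA comparing the six outcomes between users and nonusers of a service,
    controlling for age, gender, and number of care activities provided."""
    formula = (
        "cas_total + subjective_burden + caregiving_mastery + "
        "caregiving_satisfaction + care_hours + service_satisfaction "
        f"~ {use_flag} + age + C(gender) + n_care_activities"
    )
    return MANOVA.from_formula(formula, data=df).mv_test()


# Example (hypothetical data frame and column):
# result = mancova_users_vs_nonusers(survey, "used_financial_support")
```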
2.5. Text summary

Caregivers' text feedback was summarized and analyzed for common themes using content analysis.

3. Results

The response rate was 20.4%. Five questionnaires were returned blank, two were returned with only text information, and six caregivers stated that they were no longer providing any care. As a result, 164 questionnaires were entered for quantitative data analysis, an 18.9% usable response rate.

3.1. Psychometric properties of ADS-CAS

Most ADS-CAS items were missing cases at a rate between 9% and 13%, so multiple imputation was used (Rubin, 1977; Schafer, 1997). After reverse coding and multiple imputation, Cronbach's alpha for the ADS-CAS was 0.90. The power to detect statistically significant differences in ADS-CAS scores between caregivers who reported having received at least one service and those who reported not receiving any services was .78.
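The paper does not state how the .78 power figure was derived. As a rough illustration only, a two-group power calculation with the study's approximate group sizes (about 140 caregivers reporting at least one service and about 24 reporting none) reproduces a value near .78 if one assumes an effect size of about d = 0.6; that effect size is our assumption, not the authors'.

```python
from statsmodels.stats.power import TTestIndPower

# Of 164 usable surveys, 14.6% (about 24) reported receiving no services;
# the remaining ~140 reported at least one.
n_users, n_nonusers = 140, 24
power = TTestIndPower().power(
    effect_size=0.6,              # assumed standardized difference (Cohen's d)
    nobs1=n_users,
    ratio=n_nonusers / n_users,   # sets the second group's size
    alpha=0.05,
)
print(f"approximate power: {power:.2f}")  # ~0.78 under these assumptions
```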
3.2. Description of care provided by caregivers and of services provided to caregivers

About 74% of caregivers were female, 17% were male, and 9% did not specify their gender. Their ages ranged from under 20 to over 81 years, with an average age of 57. The majority stated that they were caring either for a spouse/partner (48.8%) or for parents (41.2%).

Caregivers provided from one to nine types of care to their care receivers, with an average of 6.8 (SD = 2.24). About one third provided all nine kinds of caregiving activities listed in the questionnaire: personal care, safety/supervision, housekeeping and laundry, meal preparation, medication monitoring, transportation, shopping, financial management, and standby help. The most common type of care provided was transportation (83.5%).

Caregivers reported receiving from zero to seven ADS services, with an average of 1.91 (SD = 1.54). The service most commonly used was information about services (52.4%). A surprising percentage (14.6%) stated that they had not received any services, even though all the caregivers surveyed had been identified by agencies as service recipients. These caregivers were labeled "nonusers" and used as the comparison group in the first analysis. However, they cannot fully represent real nonusers in the U.S. caregiver population; we return to this issue in the discussion section.

Table 2
MANCOVA results: marginal mean differences between users of any service and nonusers, users of a particular service and nonusers of that service, and users of a single service category and nonusers of that category (N = 164).

                                      CAS      SB       CM       CS       HOUR      SS
  Individual services
  Use or non-use of services         -3.89    -1.58    -3.00**  -3.46*    21.65    -0.03
  Services information               -1.39    -0.38    -2.02*   -0.11    -11.27    -0.04
  Assistance in accessing services    1.91    -1.53    -1.28     1.66    -11.72     0.28*
  Caregiver counseling                1.48    -1.09     0.08     0.30    -12.03     0.32*
  Caregiver training or education     2.36    -2.28     0.82    -0.74    -14.08     0.09
  Financial assistance              -10.82     3.57    -4.67*   -2.49    -23.15     0.35
  Respite services                   -7.69     2.02    -2.97    -2.70*    45.91***  0.16
  Help with housework                -3.05    -0.40    -4.45*   -1.0      4.46      0.17
  Delivered meals                    -0.15     0.48    -0.25    -0.58     3.63      0.07
  Transportation                     -5.81     1.06    -1.42    -3.33*   -23.76    -0.15
  Cash to support caregiving         -2.40     3.70     0.02     1.28    -20.49     0.51*
  Service categories
  Counseling and education            2.81    -2.80    -0.33     0.33    -15.39    -0.11
  Respite                            -3.24     1.23     0.27    -2.28     39.28*** -0.05
  Finance                            28.17   -11.66    12.57*    6.63     7.38      0.06

Note: Values are users' scores minus nonusers' scores, controlling for caregivers' age, gender, and number of caregiving activities provided as covariates. CAS, total CAS score; SB, subjective burden; CM, caregiver mastery; CS, caregiver satisfaction; HOUR, hours of care; SS, service satisfaction. Bold in the original marked 0.05 < p < 0.070. * p < 0.05; ** p < 0.01; *** p < 0.001.

Table 3
Text summary.

  Category               Keywords                      Frequency
  Exhausted caregivers   Problem                        4
                         Hard                           3
                         Frustrate/stress/tire          6
                         My health                      1
                         Collapse                       1
  Appreciation           Thank/appreciate/grateful     16
                         Wonderful                      4
  Help needed            Available                      9
                         Need/need...help              32
                         Aware                          1
                         Financial                      6
                         [Staff] change                 2

3.2.1. Gaps between care provided by caregivers and services provided to caregivers

Potential gaps were found between care provided and services received. The most common types of care provided by the caregivers were transportation, financial management, and medication monitoring, whereas the services these caregivers most commonly reported receiving were services information, respite care, and assistance in accessing services. On the one hand, this indicates good availability of the latter three services; on the other, the services provided may not match what caregivers need most, such as help with transportation. For example, only 9.8% of caregivers received transportation services, while 83.5% of caregivers provided transportation to their relatives. Another potential gap worth noting is the high rate of medication management assistance provided by caregivers (79.9%) versus the low rate of training and education received by caregivers (14%).
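The gap analysis itself is only a difference between two rates: the percentage of caregivers providing a type of care and the percentage receiving the corresponding service. A minimal sketch using the two gaps highlighted above:

```python
import pandas as pd

# Rates reported in Section 3.2.1 (percent of the 164 caregivers).
rows = {
    "transportation":        (83.5, 9.8),   # care provided vs. service received
    "medication_monitoring": (79.9, 14.0),  # vs. training/education received
}
gap_table = pd.DataFrame(rows, index=["provided_pct", "received_pct"]).T
gap_table["gap_pct_points"] = gap_table["provided_pct"] - gap_table["received_pct"]
print(gap_table)  # gaps of 73.7 and 65.9 percentage points
```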
3.3. Mean score differences in outcome measures (ADS-CAS) between users and nonusers of services and service categories

After controlling for caregivers' age, gender, and the number of care activities provided, caregivers who received assistance in accessing services, who used caregiver counseling services, or who obtained cash to support caregiving showed significantly higher satisfaction with the services they received (p < 0.05). Caregivers who received information about services, financial assistance, or help with housework reported lower caregiver mastery than caregivers not using such services (p < 0.05).

Analysis of service categories revealed additional relationships. Caregivers who received only financial support services showed significantly higher mastery (p < 0.05) than those who did not use such services. Caregivers who received only respite services spent an average of 39.28 more hours caring for care recipients in the week prior to our survey (p < 0.01) than those not using respite services. Analysis also showed that caregivers who received only counseling and education services perceived less subjective caregiving burden (p = 0.056) than others, and that caregivers who received only financial support services showed better overall caregiver appraisal (p = 0.058).
Analyses of individual services and of the three service categories revealed similar findings; therefore, individual services are not discussed separately here. Table 2 presents the full results of the three analyses, for both individual services and the three service categories, expressed as mean score differences between users and nonusers.

3.4. Content analysis of open-ended comments

Seventy-two caregivers entered textual comments on their questionnaires. Content analysis yielded 12 keywords and three themes: (1) exhausted caregivers (keywords: problem, hard, frustrate/stress/tire, my health, collapse), (2) appreciation of services received (keywords: thank/appreciate/grateful, wonderful), and (3) services needed (keywords: available, need/need...help, aware, financial, [staff] change). Table 3 shows the frequency of each keyword in caregivers' text feedback.

Many caregivers reported fatigue due to their caregiving responsibilities, and they indicated services that should be developed or improved to provide better support. The need for integrated services was frequently cited: "I was frustrated though that there is no one person and agency available to counsel or 'pull the picture' together." Caregivers also reported difficulties in assessing their needs and in determining which services were available to meet those needs: "Often services tell you what they don't do rather than ask what you need help with in caring for the person. By the time you sort through it all you find you don't/can't use services." Some caregivers reported that it took too long to search for services, wait for them, and solve their immediate problems: "I wish I had taken this class before I got so worn out." "I wish I knew these services were available before my mom passed away." "If we can get transportation I expect and hope this will help [to solve the immediate conflict between this caregiver and his/her spouse]."
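A simple keyword tally like the one behind Table 3 can be reproduced by matching stemmed keywords against each comment. The sketch below is illustrative only: the synonym groups mirror a few of the paper's keywords, and the sample comments are invented.

```python
import re
from collections import Counter

# Illustrative synonym groups mirroring a few of the keywords in Table 3.
KEYWORD_GROUPS = {
    "thank/appreciate/grateful": ("thank", "appreciat", "grateful"),
    "frustrate/stress/tire":     ("frustrat", "stress", "tire"),
    "need":                      ("need",),
}


def tally(comments):
    """Count, per keyword group, how many comments mention any of its stems."""
    counts = Counter()
    for text in comments:
        lowered = text.lower()
        for label, stems in KEYWORD_GROUPS.items():
            if any(re.search(stem, lowered) for stem in stems):
                counts[label] += 1
    return counts


print(tally(["Thank you, the respite class was wonderful.",
             "I need help before I get too tired."]))
```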
4. Discussion

Findings from the current study can contribute not only to the growing body of research on caregiving support but also to the future development of caregiving support services in the King County region.

4.1. Different caregiver appraisals between service users and nonusers

The use of different services was associated with different caregiver appraisals, and these findings add to the caregiving research literature. Using counseling and education services, such as caregiver counseling, was associated with a lessening of subjective burden, and using financial support services, such as cash support, was associated with more beneficial caregiver appraisal. Although in this study the use of respite and supplemental services was not associated with any beneficial outcome from the caregivers' point of view, it remains important to provide such support services. Each service category is discussed below.

4.2. Counseling and education services

Research findings have shown that counseling and education services are effective in helping caregivers deal with their own psychological needs and in improving caregivers' relationships with care recipients (Brodaty et al., 2003; Burns et al., 2003; Coon et al., 2003). The findings of the current study support this literature: by using counseling and education services, caregivers reduced their subjective burden. Our participants provided numerous remarks explaining that having someone to talk to, or attending a support group or counseling class, can prevent caregiver "burnout," and they said that because of these services they saved time that would otherwise have been spent dealing with negative emotions. However, it is also important to point out the possibility that caregivers who already perceive fewer burdens may be more likely to use this type of service.
Those caring for someone with more severe disabilities, and those with fewer sources of caregiving help, may perceive a higher burden and therefore have less energy to use these services (Markle-Reid & Browne, 2001; Toseland, McCallion, Gerber, & Banks, 2002). It is important to consider whether counseling and education services are more useful for caregivers with lighter care loads; a full examination of this topic will require future studies with more rigorous methodologies, such as randomized controlled trials or quasi-experimental designs.

4.3. Financial services

Compared to the other two categories of service, using financial support services was associated with more positive caregiving appraisals. These services provide a flexible pool of funds to Medicaid-eligible persons to purchase goods or services for family caregivers. Since, according to one study, one third of family caregivers reported the loss of all of their family savings (GAO, 1994), providing financial services likely gives caregivers the opportunity to focus on their caregiving activities and to develop higher confidence and satisfaction. However, we should not ignore another possible explanation: caregivers who were able to gain access to these funds may have been competent users of the system who already had higher caregiving mastery and more positive caregiving appraisal. Providing financial support to caregivers is a relatively new service, developed in the last 15 years (Doty, Jackson, & Crown, 1998).
Only a limited number of studies have examined the effect of financial support services for caregivers (Eckert, Morgan, & Swamy, 2004; Mahoney, Simon-Rusinowitz, Loughlin, Desmond, & Squillace, 2004). The findings of this study encourage further investigation of the cost-effectiveness of providing financial support services to caregivers.

4.4. Respite and supplemental services

In contrast to previous findings (Cox, 1997; Gaugler et al., 2003a,b; Krout, 1995; Zarit et al., 1998), this study found that caregivers using respite and supplemental services spent more hours on caregiving than nonusers and did not show any positive caregiver appraisals. Although these caregivers did not report any beneficial outcomes, it is important to recognize that this group might be under a great deal of stress from their care responsibilities and might still have a significant need for such support. Further analysis showed that older caregivers were the group who most used respite services and who most often requested help with housework; this group also spent more time on caregiving. These caregivers were more likely to be spouses than adult children, and they may also have more health problems of their own. Thus, this group might be closest to giving up caregiving out of exhaustion, and the services they received apparently did not meet their needs.
It is crucial to learn more about the needs of this group of caregivers and to modify these services accordingly.

4.5. Services that should be developed

Helping caregivers to "pull the picture together" should be the first task for case managers and service providers when first contacting caregivers. Caregivers are already exhausted by their caregiving tasks, and it is an added burden for them to seek support from different resources. It will be crucial to develop a single window that can both provide all the information caregivers need and help them access services more efficiently. This would help ensure that available services are used by those who need them.

Service availability is an important issue and requires more attention from providers and policy makers. Several potential gaps were noted between care activities and services received by caregivers, such as in transportation and medication management. These gaps may indicate low service availability or accessibility, which needs improvement, or may simply indicate that caregivers were confident in their ability to provide such care activities and had no need for additional support. Given that transportation was the most common type of care activity provided by caregivers in the current study, and was commonly mentioned in their text feedback, it would likely be one of the services caregivers would use if it were more available to them. The results showed that caregivers need access to transportation services and expect that such services would solve their current problems. Furthermore, in a recent national study, the rate of transportation service use by caregivers was almost twice the rate reported here (18% in the AARP study vs. 9.8% in the current study) (AARP, 2004). This suggests a need to make this service more available to caregivers in King County.

There is also a potential gap between the high rate of medication management assistance and the low rate of training services on this subject. The percentage of caregivers in the current study who provided medication monitoring was much higher than in the national study (80% vs. 41%) (AARP, 2004). This strongly suggests the importance of providing more medication management education programs for caregivers in this region; the low rate of caregiver education and training programs here indicates a gap that needs to be filled.
For future studies of caregivers living in Seattle/King County, it might be important to investigate their knowledge of the medications they give to their care receivers and what kind of support would help them perform this care activity better.

Another potential gap worth noting is that 14.6% of caregivers stated that they did not receive any services, although all the caregivers surveyed were listed by the agencies as having received some form of service. It is intriguing that this group either did not remember receiving, or did not believe they had received, services. The amount of services received may not have been substantial enough for caregivers to note, or the services provided may not have been what these caregivers were looking for. Both possibilities point to inadequate service delivery and call for further investigation. Further study of the differences between this group, who likely received some services they did not remember or report, and real nonusers, who are in need but do not receive any services, will also be important.

5. Limitations

Several methodological issues limit this study. The cross-sectional design made it impossible to draw causal inferences. The long and variable time between when caregivers received services and when they responded to the ADS-CAS was a threat to validity; adding a variable to assess the time between service use and survey response is recommended for future studies. Moreover, past research has found that caregivers' perceptions of distress may be influenced by different factors at different stages of caregiving (Vitaliano et al., 2002). A longitudinal follow-up would therefore help to determine the optimal time to provide caregivers with particular services, and this information would be valuable for future policy making.

Another limitation was the low response rate of 20.4%. We explored potential reasons for this. One contributing factor may be that many caregivers do not self-identify with the term "caregiver," which is probably why five questionnaires were returned blank; this has been a recurring theme and a challenge for the implementation and evaluation of family caregiver supports in the United States (Feinberg & Newman, 2006). The low response rate may also reflect the substantial length of the questionnaire. Jepson, Asch, Hershey, and Ubel (2005) studied the correlation between response rate and questionnaire length and suggested that questionnaires above a threshold of 1000 words have lower response rates; our questionnaire was over 5000 words, even after we removed two subscales from the CAS.
Detailed information about care recipients was not collected, for the same reasons stated above. This may limit the generalizability of the study findings to the broader caregiver population. We believe that the characteristics of caregivers in the current study are likely close to those of the general caregiver population: the current study's demographics show a similar composition of age, gender, and number of services provided and received compared with caregiver demographics in the National Family Caregiver Study (AARP, 2004). There may well be differences in other variables, of course. Increasing the response rate in future work will be important and can be addressed by further shortening the questionnaire; other methods include offering incentive payments, conducting a follow-up survey, or providing a token gift.

6. Lessons learned

6.1. Lessons for health care professionals

Findings from this study help community service providers, such as community nurses, case managers, and social workers, better understand the relationships between caregivers' service use and caregiver appraisals. Caregiving responsibilities sometimes begin without warning, leaving caregivers no time to prepare before assuming them; they may not know what services are available or which would be most helpful. Knowledge generated from the current study can help case managers and service providers help caregivers and care recipients anticipate and prioritize their needs, and better support caregivers with the services they need most. For example, if a family caregiver expresses great subjective burden, our findings suggest that case managers should consider offering counseling and education services first.

6.2. Lessons for Area Agencies on Aging in the United States

Our experience of evaluating the FCSP program at the county level will be beneficial for other AAAs, particularly with respect to our collaboration with local service agencies. This study helps to address Feinberg and Newman's (2006) call for a uniform assessment tool that can help us understand and redress the unevenness in current caregiver service programs. The tool developed in this study has strong potential to become a standard tool for other states or AAAs. Our report of the gaps between the care provided and the services received by caregivers in King County offers another useful way of assessing service adequacy in a region, and it could be easily adopted elsewhere.

6.3. Lessons for future questionnaire development
The evaluation of individual services revealed findings similar to the evaluation of the three service categories. We therefore recommend listing the three service categories in future questionnaires instead of all the detailed services provided by agencies. Listing every individual service not only increases the time required to complete the questionnaire, but may also unnecessarily confuse caregivers, since such questionnaires require the caregiver to identify exactly which service(s) they received.

7. Conclusion

This pilot study was designed to evaluate the FCSP, but it also provided valuable information about the effects of caregiver support services and an understanding of which types of services might be associated with particular caregiver outcomes. These findings are useful to community care professionals and are also of practical value to program planners, policy makers, and formal care providers. They can serve as a basis for more rigorous future evaluations of caregiver support services.

Acknowledgements

The authors greatly appreciate the support and advice of Rosemary Cunningham, Margaret Casey, and all of the team members of the Family Caregiver Support Program at Aging and Disability Services. The authors would also like to extend their gratitude to Senior Services, the Evergreen Healthcare-Geriatric Regional Assessment Team, the Northshore Senior Center, and the Kin On Community Caregiver Network-Caregiver Support. Their gracious help made this study possible.
  • 33. 113–119 119 Disability Services. The authors would also like to extend their gratitude to Senior Services, the Evergreen Healthcare-Geriatric Regional Assessment Team, the Northshore Senior Center, and the Kin On Community Caregiver Network-Caregiver Support. Their gracious help made this study possible. References AARP. (2004). Caregiving in the U.S.: National Alliance for Caregiving and AARP. ADS. (2003). Cash and Counseling Pilot Project. Seattle: Aging and Disability Services. Berry, G. L., Zarit, S. H., & Rabatin, V. X. (1991). Caregiver activity on respite and nonrespite days: A comparison of two service approaches. Gerontologist, 31(6), 830–835. Brodaty, H., Green, A., & Koschera, A. (2003). Meta-analysis of psychosocial interven- tions for caregivers of people with dementia. Journal of the American Geriatrics Society, 51(5), 657–664. Burns, L. R., Walston, S. L., Alexander, J. A., Zuckerman, H. S., Andersen, R. M., Torrens, P. R., et al. (2001). Just how integrated are integrated delivery systems? Results from a national survey. Health Care Management Review, 26(1), 20– 39. Burns, R., Nichols, L. O., Martindale-Adams, J., Graney, M. J., & Lummus, A. (2003).
Burns, R., Nichols, L. O., Martindale-Adams, J., Graney, M. J., & Lummus, A. (2003). Primary care interventions for dementia caregivers: 2-year outcomes from the REACH study. Gerontologist, 43(4), 547–555.
Chang, B. L. (1999). Cognitive-behavioral intervention for homebound caregivers of persons with dementia. Nursing Research, 48(3), 173–182.
Coleman, B. P., & Pandya, S. M. (2002). Family caregiving and long-term care. Washington, DC: Public Policy Institute, AARP.
Coon, D. W., Thompson, L., Steffen, A., Sorocco, K., & Gallagher-Thompson, D. (2003). Anger and depression management: Psychoeducational skill training interventions for women caregivers of a relative with dementia. Gerontologist, 43(5), 678–689.
Cox, C. (1997). Findings from a statewide program of respite care: A comparison of service users, stoppers, and nonusers. Gerontologist, 37(4), 511–517.
Deeken, J. F., Taylor, K. L., Mangan, P., Yabroff, K. R., & Ingham, J. M. (2003). Care for the caregivers: A review of self-report instruments developed to measure the burden, needs, and quality of life of informal caregivers. Journal of Pain and Symptom Management, 26(4), 922–953.
Doty, P., Jackson, M. E., & Crown, W. (1998). The impact of female caregivers' employment status on patterns of formal and informal eldercare. Gerontologist, 38(3), 331–341.
Eckert, J. K., Morgan, L. A., & Swamy, N. (2004). Preferences for receipt of care among community-dwelling adults. Journal of Aging & Social Policy, 16(2), 49–65.
Edelman, P., & Hughes, S. (1990). The impact of community care on provision of informal care to homebound elderly persons. Journal of Gerontology, 45(2), S74–S84.
Feinberg, L. F., & Newman, S. L. (2006). Preliminary experiences of the states in implementing the National Family Caregiver Support Program: A 50-state study. Journal of Aging & Social Policy, 18(3–4), 95–113.
Gallagher-Thompson, D., Coon, D. W., Solano, N., Ambler, C., Rabinowitz, Y., & Thompson, L. W. (2003). Change in indices of distress among Latino and Anglo female caregivers of elderly relatives with dementia: Site-specific results from the REACH national collaborative study. Gerontologist, 43(4), 580–591.
GAO. (1994). Long Term Care Population. Washington, DC: United States General Accounting Office.
Gaugler, J. E., Jarrott, S. E., Zarit, S. H., Stephens, M. A., Townsend, A., & Greene, R. (2003a). Adult day service use and reductions in caregiving hours: Effects on stress and psychological well-being for dementia caregivers. International Journal of Geriatric Psychiatry, 18(1), 55–62.
Gaugler, J. E., Jarrott, S. E., Zarit, S. H., Stephens, M. A., Townsend, A., & Greene, R. (2003b). Respite for dementia caregivers: The effects of adult day service use on caregiving hours and care demands. International Psychogeriatrics, 15(1), 37–58.
Gitlin, L. N., Winter, L., Corcoran, M., Dennis, M. P., Schinfeld, S., & Hauck, W. W. (2003). Effects of the home environmental skill-building program on the caregiver-care recipient dyad: 6-month outcomes from the Philadelphia REACH Initiative. Gerontologist, 43(4), 532–546.
Gottlieb, B., & Johnson, J. (1993). Impact of Day Care Programs on Family Caregivers of Persons with Dementia. Guelph, Ontario: Gerontology Research Centre, University of Guelph.
Hepburn, K. W., Tornatore, J., Center, B., & Ostwald, S. W. (2001). Dementia family caregiver training: Affecting beliefs about caregiving and caregiver outcomes. Journal of the American Geriatrics Society, 49(4), 450–457.
Jepson, C., Asch, D. A., Hershey, J. C., & Ubel, P. A. (2005). In a mailed physician survey, questionnaire length had a threshold effect on response rate. Journal of Clinical Epidemiology, 58(1), 103–105.
Kosloski, K., & Montgomery, R. J. V. (1993). The effects of respite on caregivers of Alzheimer's patients: One year evaluation of the Michigan model respite programs. Journal of Applied Gerontology, 12(1), 4–7.
Kosloski, K., & Montgomery, R. J. (1994). Investigating patterns of service use by families providing care for dependent elders. Journal of Aging and Health, 6(1), 17–37.
Kristjanson, L. J., Atwood, J., & Degner, L. F. (1995). Validity and reliability of the Family Inventory of Needs (FIN): Measuring the care needs of families of advanced cancer patients. Journal of Nursing Measurement, 3(2), 109–126.
Krout, J. A. (1995). Senior centers and services for the frail elderly. Journal of Aging & Social Policy, 7(2), 59–76.
Lawton, M. P., & Brody, E. M. (1969). Assessment of older people: Self-maintaining and instrumental activities of daily living. The Gerontologist, 9, 179–186.
Lawton, M. P., Kleban, M. H., Moss, M., Rovine, M., & Glicksman, A. (1989). Measuring caregiving appraisal. Journal of Gerontology, 44(3), P61–P71.
Lee, H., & Cameron, M. (2004). Respite care for people with dementia and their carers. Cochrane Database of Systematic Reviews, 2, CD004396.
Maas, M. L., Reed, D., Park, M., Specht, J. P., Schutte, D., Kelley, L. S., et al. (2004). Outcomes of family involvement in care intervention for caregivers of individuals with dementia. Nursing Research, 53(2), 76–86.
Mahoney, K. J., Simon-Rusinowitz, L., Loughlin, D. M., Desmond, S. M., & Squillace, M. R. (2004). Determining personal care consumers' preferences for a consumer-directed cash and counseling option: Survey results from Arkansas, Florida, New Jersey, and New York elders and adults with physical disabilities. Health Services Research, 39(3), 643–664.
Markle-Reid, M., & Browne, G. (2001). Explaining the use and non-use of community-based long-term care services by caregivers of persons with dementia. Journal of Evaluation in Clinical Practice, 7(3), 271–287.
Montgomery, R. J., & Borgatta, E. F. (1989). The effects of alternative support strategies on family caregiving. Gerontologist, 29(4), 457–464.
Montgomery, R. J. V., Gonyea, J. G., & Hooyman, N. R. (1985). Caregiving and the experience of subjective and objective burden. Family Relations, 34, 19–26.
Montoro-Rodriguez, J., Kosloski, K., & Montgomery, R. J. (2003). Evaluating a practice-oriented service model to increase the use of respite services among minorities and rural caregivers. Gerontologist, 43(6), 916–924.
Newcomer, R., Yordi, C., DuNah, R., Fox, P., & Wilkinson, A. (1999). Effects of the Medicare Alzheimer's Disease Demonstration on caregiver burden and depression. Health Services Research, 34(3), 669–689.
Okamoto, M., Murashima, S., & Saito, E. (1998). Effectiveness of day care service for elderly patients with dementia and their caregivers as observed by comparison of days with and without day care services. Nippon Koshu Eisei Zasshi, 45(12), 1152–1161.
Pearlin, L. I., & Schooler, C. (1978). The structure of coping. Journal of Health and Social Behavior, 19(1), 2–21.
Quayhagen, M. P., Quayhagen, M., Corbeil, R. R., Hendrix, R. C., Jackson, J. E., Snyder, L., et al. (2000). Coping with dementia: Evaluation of four nonpharmacologic interventions. International Psychogeriatrics, 12(2), 249–265.
Roberts, J., Browne, G., Milne, C., Spooner, L., Gafni, A., Drummond-Young, M., et al. (1999). Problem-solving counseling for caregivers of the cognitively impaired: Effective for whom? Nursing Research, 48(3), 162–172.
Rubin, D. B. (1977). Formalizing the subjective notions about the effect of nonrespondents in sample surveys. Journal of the American Statistical Association, 72, 538–543.
Schafer, J. L. (1997). Analysis of incomplete multivariate data. London: Chapman & Hall.
Schafer, J. L. (1999). Multiple imputation: A primer. Statistical Methods in Medical Research, 8, 3–15.
Schafer, J. L. (2000). Software for multiple imputation. Retrieved May 5, 2002, from http://www.stat.psu.edu/~jls/misoftwa.html
Toseland, R. W., Blanchard, C. G., & McCallion, P. (1995). A problem solving intervention for caregivers of cancer patients. Social Science & Medicine, 40(4), 517–528.
Toseland, R. W., McCallion, P., Gerber, T., & Banks, S. (2002). Predictors of health and human services use by persons with dementia and their family caregivers. Social Science & Medicine, 55(7), 1255–1266.
Toseland, R. W., McCallion, P., Smith, T., & Banks, S. (2004). Supporting caregivers of frail older adults in an HMO setting. The American Journal of Orthopsychiatry, 74(3), 349–364.
Toseland, R. W., McCallion, P., Smith, T., Huck, S., Bourgeois, P., & Garstka, T. A. (2001). Health education groups for caregivers in an HMO. Journal of Clinical Psychology, 57(4), 551–570.
Tourigny, A., Durand, P., Bonin, L., Hebert, R., & Rochette, L. (2004). Quasi-experimental study of the effectiveness of an integrated service delivery network for the frail elderly. Canadian Journal on Aging, 23(3), 231–246.
Tringali, C. A. (1986). The needs of family members of cancer patients. Oncology Nursing Forum, 13(4), 65–70.
Vitaliano, P. P., Scanlan, J. M., Zhang, J., Savage, M. V., Hirsch, I. B., & Siegler, I. C. (2002). A path model of chronic stress, the metabolic syndrome, and coronary heart disease. Psychosomatic Medicine, 64(3), 418–435.
Vitaliano, P. P., Young, H. M., & Russo, J. (1991). Burden: A review of measures used among caregivers of individuals with dementia. Gerontologist, 31(1), 67–75.
Weuve, J. L., Boult, C., & Morishita, L. (2000). The effects of outpatient geriatric evaluation and management on caregiver burden. Gerontologist, 40(4), 429–436.
Yordi, C., DuNah, R., Bostrom, A., Fox, P., Wilkinson, A., & Newcomer, R. (1997). Caregiver supports: Outcomes from the Medicare Alzheimer's disease demonstration. Health Care Financing Review, 19(2), 97–117.
Zank, S., & Schacke, C. (2002). Evaluation of geriatric day care units: Effects on patients and caregivers. The Journals of Gerontology, Psychological Sciences and Social Sciences, 57(4), P348–P357.
Zarit, S. H., Stephens, M. A., Townsend, A., & Greene, R. (1998). Stress reduction for family caregivers: Effects of adult day care use. The Journals of Gerontology, Psychological Sciences and Social Sciences, 53(5), S267–S277.
Ya-Mei Chen, Ph.D., M.P.H. Dr. Chen's research focuses on the development of community-based long-term care services for elders and their families. With her expertise in program intervention and evaluation, Dr. Chen has been involved in several federal and state-funded projects to help develop and evaluate programs specific to community elders.

Susan C. Hedrick, Ph.D. Dr. Hedrick's research focuses on the cost-effectiveness of interventions to improve care for persons with chronic illnesses.

Heather M. Young, Ph.D. Dr. Young's research and clinical interests focus on environments that promote healthy aging. She has played an instrumental role in shaping long-term care policies in Washington State through her evaluation research.

Guide to Program Evaluation

Getting Started
What Is Evaluation; Types of Evaluation Activities; Benefits of Evaluation; Evaluation Concerns; Evaluation Constraints

Planning the Evaluation
Are You Ready for Evaluation; Working With an Outside Evaluator; Developing an Evaluation Plan; Developing and Working With Program Logic Models

Assessing Program Performance
Identifying Goals and Objectives; Measuring Activities and Outputs (Process Evaluation); Measuring Outcomes (Impact Evaluation); Establishing the "Activities-Outcomes" Connection (Evaluation Experiments)

Data Collection
New or Existing Data; Using Existing Data; Using New Data; Other Considerations
Reporting and Using Evaluation Results
Reviewing Evaluation Findings With Stakeholders; Writing a Final Report; Using Evaluation Results

Getting Started

What Is Evaluation?

Evaluation is a systematic, objective process for determining the success of a policy or program. It addresses questions about whether and to what extent the program is achieving its goals and objectives.

Learn More...
A Typology of Evaluation Levels (Office of Juvenile Justice and Delinquency Prevention)
An Overview of Education Evaluation (Department of Education)
Developing a Strategy for Evaluation (National Institute of Justice)
Identifying Effective Criminal Justice Programs: Guidelines and Criteria for the Nomination of Effective Programs (Bureau of Justice Assistance)
Underlying Premise of Assessment and Evaluation (Bureau of Justice Assistance)
Types of Evaluation Activities

Program Monitoring

Program monitoring involves the ongoing collection of information to determine if programs are operating according to plan. Monitoring provides ongoing information on program implementation and functioning.

Learn More...
Basic Monitoring and Comparative Monitoring (Office of Juvenile Justice and Delinquency Prevention)
Install a Monitoring System to Provide Continuous Feedback (National Institute of Justice)
Selecting an Evaluation Design (National Institute of Justice)

Performance Measurement/Assessment

Program measurement or assessment involves the ongoing collection of information on whether a program is meeting its goals and objectives. Performance measures can address project activities, services delivered, and the products of those services.

Learn More...
Introduction (Fairfax County Department of Management and Budget, pp. 4-7)
Types of Program Performance Assessment (Government Accounting Office)
Using Indicators Effectively (Vera Institute of Justice, pp. 2-15)

Process or Implementation Evaluation

Process evaluation focuses on program implementation and operation. A process evaluation can answer questions regarding program effort, identify the processes or procedures used to carry out the functions of the program, and address program operation and performance.
Learn More...
Documenting and Analyzing Program Installation and Operations (Department of Education)
Implement a Process Evaluation to Document What is Done, When, By Whom, To Whom (National Institute of Justice)
Process Evaluation (Bureau of Justice Assistance)

Outcome or Impact Evaluation

This type of evaluation focuses on program success and accomplishments. These evaluations answer questions regarding program effectiveness, address whether a program is achieving its goals and objectives, and examine unintended consequences, both positive and negative.

Learn More...
Basic Outcome Evaluation and Comparative Outcome Evaluation (Office of Juvenile Justice and Delinquency Prevention)
Impact Evaluation (Bureau of Justice Assistance)
Observing Behavioral Outcomes and Attributing Changes to the Program (Department of Education)

Cost-Effectiveness and Cost-Benefit Assessment

These assessments use the results of a sound program evaluation to assess how effective a program is, relative to other program alternatives, in terms of cost. Cost-benefit analysis does not answer the question of whether the program works; instead, it uses the results of evaluations to compare the economic value of the outcomes and costs of one program with another. A simple worked comparison appears after the reading list below.

Learn More...
Comparative Costs and Benefits of Programs to Reduce Crime, Version 4.0 (Washington State Institute for Public Policy)
Distinguishing Cost-Benefit Analysis from Program Evaluation (Justice Research and Statistics Association, p. 6)
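To make the distinction concrete, here is a minimal sketch in Python of the comparison a cost-benefit analysis produces. The program names and dollar figures are invented for illustration; a real analysis would monetize outcomes using evidence-based estimates (for example, the value of crimes averted).

# Comparing two hypothetical programs on monetized benefits versus costs.

def benefit_cost_ratio(monetized_benefits, program_cost):
    """Dollars of outcome value produced per dollar spent."""
    return monetized_benefits / program_cost

programs = {
    "Program A": {"benefits": 750_000, "cost": 500_000},
    "Program B": {"benefits": 600_000, "cost": 300_000},
}

for name, p in programs.items():
    ratio = benefit_cost_ratio(p["benefits"], p["cost"])
    net = p["benefits"] - p["cost"]
    print(f"{name}: net benefit ${net:,}, benefit-cost ratio {ratio:.2f}")

# Program B returns $2.00 per dollar spent versus $1.50 for Program A, even
# though A yields larger total benefits. The analysis compares programs; it
# does not by itself establish that either program works.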
Benefits of Evaluation

Programs that participate in evaluations obtain objective information about their performance and how it can be improved. Evaluation can provide objective evidence that a program is effective, demonstrating positive outcomes to funding sources and the community. It can help improve program effectiveness and can create opportunities for programs to share information with other similar programs and agencies.

Programs can use evaluation findings in a number of ways. For example, a program can use evidence of its success to make a case for continued funding and to attract new funding sources. A well-executed evaluation will point out areas in which the program can improve its operations. Sharing the results of an evaluation also benefits others outside the program who are seeking to replicate justice interventions that work.

Learn More...
Benefits of Evaluation (Department of Housing and Urban Development)
Introduction (National Institute of Justice)
Evaluation Concerns

Program managers and staff can sometimes be reluctant participants in the evaluation process. Below are some frequently expressed concerns about program evaluation and responses to those concerns.

Concern: Evaluation draws resources away from program services.
Response: Without evaluation, how do you know that the services being provided are effective? Program managers can explore options for obtaining evaluation services inexpensively.

Concern: Evaluation increases the burden on program staff.
Response: Evaluators can often implement changes to current client data collection procedures, resulting in little additional effort on the part of program staff. To reduce the burden and increase "buy-in," program staff should be involved in designing evaluation instruments and interpreting evaluation findings.

Concern: Evaluation is too complicated for program managers and staff to understand.
Response: An evaluation does not need to have the most rigorous scientific method, design, and analysis to be considered useful and valuable. Evaluation findings should be expressed in a manner that can be readily understood and used by program managers, staff, and other stakeholders.

Concern: Evaluation may produce negative results that will harm the program.
Response: A good evaluation will point out both program strengths and weaknesses. No reputable evaluator will willingly participate in an evaluation designed to harm a program.

Learn More...
Common Concerns about Evaluation (Department of Housing and Urban Development)
Guide to Frugal Evaluation for Criminal Justice (National Institute of Justice, Chapter 6)
Evaluation Constraints

Every evaluation is carried out under certain constraints or limitations, and these should be identified as part of the planning process. Two major evaluation constraints are time and cost. Evaluation results that are not timely are not useful to program managers and funding agencies; when evaluation information is needed quickly, the evaluation must address fewer questions. Similarly, the financial resources available for the evaluation help to determine its scope. The strengths and weaknesses of various evaluation approaches should be considered while keeping in mind the level of resources available.

Learn More...
Considering the Evaluation's Constraints (General Accounting Office)
Planning the Evaluation

Are You Ready for Evaluation?

Not all programs are ready to be evaluated; that is, they are not able to provide information or otherwise fully participate in the evaluation. To determine whether a program is ready for evaluation, evaluators have developed the process of "evaluability assessment." An evaluability assessment, undertaken prior to an evaluation, is designed to address the question of whether the program can participate fully in an evaluation. Some examples of questions that can be addressed in an evaluability assessment are listed below.

Is there a formal program design or model in place? Programs must be able to document their goals and objectives, and the strategies they employ to achieve those goals and objectives.

Is the program design or model a sound one? If program goals are unrealistic or strategies are not based in theory or prior evidence, or if program managers cannot explain how the activities and services they provide are expected to lead to the program's desired outcomes, then evaluation is not a good investment.

Can the program participate in the evaluation? Evaluations require data and information. If the program does not collect data, and has no capacity to generate data, then the evaluation will not be successful.

Example of an Evaluability Assessment: The Youth Monitoring Program

Learn More...
Assessing Readiness for Evaluation (National Institute of Justice)
Determining Whether to Evaluate at All (National Institute of Justice)
Evaluability Assessment: Examining the Readiness of a Program for Evaluation (Justice Research and Statistics Association)
  • 56. outside of the agency or program being evaluated. If funds are available, a trained and experienced evaluator can be of great assistance to a program throughout the evaluation process. If in-house expertise is available, the advantages and disadvantages of using this person or an external evaluator must be weighed. Regardless of whether the evaluator is internal or external to the agency being evaluated, finding a qualified evaluator is essential. A qualified evaluator should be experienced in evaluating similar programs; should try to balance the needs and concerns of a variety of decision-makers, including the program managers, with issues related to the objectivity of the evaluation; and should be able to communicate with a wide variety of individuals who have an interest in the results of their work. Learn More... Building Evaluation into a Program RFP and Preparing an Evaluation RFP (Office of Juvenile Justice and Delinquency Prevention) Choosing an Evaluator (Office of Juvenile Justice and Delinquency Prevention) Conducting Evaluations In-House or Under Contract (National Institute of Justice) Hiring and Working with an Evaluator (Justice Research and Statistics Association) Who Should Conduct Your Evaluation? (Department of Housing and Urban Development)
  • 57. http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/docu mentb.html#critical http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/docu mentg.html#timeframe http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/buil ding_evaluation_into_a_progr.htm http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/buil ding_evaluation_into_a_progr.htm#preparing http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/choo sing_an_evaluator.htm http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/chap ter_4_nij_guide.htm http://www.jrsa.org/njjec/publications/evaluator.pdf http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/chap ter_3_housing.htm Developing an Evaluation Plan Once you have determined that you are ready for evaluation and have decided who will conduct the evaluation, the next step is to develop an evaluation plan. An evaluation plan is a description of the evaluation process. Some of the key elements that should be addressed in the evaluation plan include: who is the target audience for the evaluation; what evaluation questions will be asked; how the evaluation will be designed; what data will be collected, how and by whom; and what final products will be produced. The evaluation plan should detail the roles that various
  • 58. individuals will play in the evaluation process; these individuals include the evaluator, the program manager, staff, clients, and any other stakeholders. Opportunities for preliminary review of findings and conclusions should be built into the plan. Learn More... Developing an Evaluation Plan (Department of Housing and Urban Development). Developing an Evaluation Plan (Justice Research and Statistics Association, p. 7) Steps in Planning Evaluations (U.S. Department of Education) Developing and Working with Program Logic Models While there are many forms, logic models specify relationships among program goals, objectives, activities, outputs, and outcomes. Logic models are often developed using graphics or schematics and allow the program manager or evaluator to clearly indicate the theoretical connections among program components: that is, how program activities will lead to the accomplishment of objectives, and how accomplishing objectives will lead to the fulfillment of goals. In addition, logic models used for evaluation include the measures that will be used to determine if activities were carried out as planned (output measures) and if the program's objectives have been met (outcome measures). Why Use a Logic Model?
  • 59. Logic models are a useful tool for program development and evaluation planning for several reasons: • They serve as a format for clarifying what the program hopes to achieve; http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/deve loping_an_evalu.htm http://www.jrsa.org/pubs/juv-justice/briefing_evaluator.html http://www.ed.gov/offices/OUS/PES/primer3.html • They are an effective way to monitor program activities; • They can be used for either performance measurement or evaluation; • They help programs stay on track as well as plan for the future; and • They are an excellent way to document what a program intends to do and what it is actually doing. Learn More About What a Logic Model Is and Why To Use It Developing a Logic Model (The Urban Institute) Developing and Using a Logic Model (The Urban Institute) A Guide on Logic Model Development for CDC’s Prevention Research Centers (Sundra, Scherer, and Anderson) Logic Model for Program Planning and Evaluation (University of Idaho-Extension)
  • 60. How to Develop a Logic Model Developing a logic model requires a program planner to think systematically about what they want their program to accomplish and how it will be done. The logic model should illustrate the linkages of among the elements of the program including the goal, objectives, resources, activities, process measures, outcomes, outcome measures, and external factors. Logic Model Schematic The following logic model format and discussion was developed by the Juvenile Justice Evaluation Center (JJEC) and maintained online by the Justice Research and Statistics Association (www.jrsa.org) from 1998 to 2007. http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/eval uation_strategies_p3_7.html http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/stop 1-4.html#chap2 http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/cdc- logic-model-development.pdf http://www.uidaho.edu/extension/LogicModel.pdf The following discussion explains the interconnectedness among the elements of the logic model. At the top of the logic model example is a goal which represents a broad, measurable statement
  • 61. that describing the desired long-term impact of the program. Knowing the expected long-term achievements a program is expected to make will help in determining what the overall program goal should be. Sometimes goals are not always achieved during the operation of a program. However, evaluators or program planners should continually re- visit the program's goals during program planning. An objective is a more specific, measurable concept focused on the immediate or direct outcomes of the program that support accomplishment of your goal. Unlike goals, objectives should be achieved during the program. A clear objective will provide information concerning the direction, target, and timeframe for the program. Knowing the difference your program will make, who will be impacted, and when will be helpful when developing focused objectives for your program. Resources or inputs can include staff, facilities, materials, or funds, etc--anything invested in the program to accomplish the work that must be done. The resources needed to conduct a program should be articulated during the early stages of program development to insure that a program is realistically implemented and capable of meeting its stated goal(s). Activities represent efforts conducted to achieve the program objectives. After considering the resources a program will need, the specific activities that will
  • 62. be used to bring about the intended changes or results must be determined. Process Measures are data used to demonstrate the implementation of activities. These include products of activities and indicators of services provided. Process measures provide documentation of whether a program is being implemented as originally intended. For example, process measures for a mental health court program might include the number of treatment contacts or the type of treatment received. Outcome measures represent the actual change(s) or lack thereof in the target (e.g., clients or system) of the program that are directly related to the goal(s) and objectives. Outcomes may include intended or unintended consequences. Three levels of outcomes to consider include: Initial outcomes: Immediate results of a program. Intermediate outcomes: The results following initial outcomes. Long Term: The ultimate impact of a program. External Factors, located at the bottom of the logic model example, are factors within the system that may affect program operation. External factors vary according to program setting and may include influences such as development of or revisions to state/federal laws, unexpected changes in data sharing procedures, or other similar simultaneously running programs. It is important to think about external factors that might change how your program operates or affect program outcomes. External factors should be included during the development of the logic
  • 63. model so that they can be taken into account when assessing program operations or when interpreting the absence or presence of program changes. If-Then Logic Model Another way to develop a logic model is by using an "if-then" sequence that indicates how each component relates to each other. Conceptually, the if-then logic model works like this: IF [program activity] THEN [program objective] and IF [program objective] THEN [program goal]. In reality, the if-then logic model looks like this: IF a truancy reduction program is offered to youth who have been truant from school THEN their school attendance will increase and IF their school attendance is increased THEN their graduation rates will increase. Another way to conceptualize the "if-then" format: • If the required resources are invested, then those resources can be used to conduct the program activities. • If the activities are completed, then the desired outputs for the target population will be produced. • If the outputs are produced, then the outcomes will indicate that the objectives of the program have been accomplished.
  • 64. Developing program logic using an "if-then" sequence can help a program manager or evaluator maintain focus and direction for the project and help specify what will be measured through the evaluation. Common Problems When Developing Logic Models • Links among elements (e.g., objectives, activities, outcome measures) of the logic model are unclear or missing. It should be obvious which objective is tied to which activity, process measure, etc. Oftentimes logic models contain lists of each of the elements of a logic model without specifying which item on one list is related to which item on another list. This can easily lead to confusion regarding the relationship among elements or result in accidental omission of an item on a list of elements. • Too much (or too little) information is provided on the logic model. The logic model should include only the primary elements related to program/project design and operation. As a general rule, it should provide the "big picture" of the program/project and avoid providing very specific details related to how, for example, interventions will occur, or a list of all the agencies that will serve to improve collaboration efforts. If you feel that a model with all those details is necessary, consider developing two models; a model with the fundamental elements and a model with the details. • Objectives are confused with activities. Make sure that items listed as objectives are in fact objectives rather than activities. Anything
  • 65. related to program implementation or a task that is being carried out in order to accomplish something is an activity rather than an objective. For example, 'hire 10 staff members' is an activity that is being carried out in order to accomplish an objective such as 'improve response time for incoming phone calls.' • Objectives are not measurable. Unlike goals which are not considered measurable because they are broad, mission-like statements, objectives should be measurable and directly related to the accomplishment of the goal. An objective is measurable when it specifically identifies the target (who or what will be affected), is time-oriented (when it will be accomplished), and indicates direction of desired change. In many cases, measurable objectives also include the amount of change desired. Other Logic Model Examples Phoenix Gang Logic Model OJJDP Generic Logic Model United Way Program Outcome Model University of Missouri Extension Program Planning and Development Logic Model Learn More About How to Develop a Logic Model Developing a Basic Logic Model for Your Program (The University of Arizona School of Public Health)
  • 66. Enhancing Performance with Logic Models (University of Wisconsin-Extension, Division of Cooperative Extension) Establishing Goals, Objectives and Evaluation Criteria (U.S. Department of Housing and Urban Development) Using the Logic Model for Program Planning (Legal Service Corporation Resource Information) Assessing Program Performance Identifying Goals and Objectives Programs must have clearly specified goals and objectives before an evaluation can take place. A program goal is a broad statement of what the program hopes to accomplish or what changes it expects to produce. Examples of program goal statements include: • Reduce reoffending among substance abusing offenders served by the program • Reduce the crime rate in the neighborhood targeted by the program • Restore a sense of well-being to victims of crime An objective is a specific and measurable condition that must be attained in order to accomplish a particular program goal. There are many different ways to specify objectives; the program and http://www.newfreedomprograms.com/download/gp_logic_mode
  • 67. l.pdf http://www.ojjdp.ncjrs.gov/grantees/pm/generic_logic_model.pd f http://national.unitedway.org/outcomes/resources/mpo/model.cf m http://outreach.missouri.edu/staff/programdev/plm/ http://www.publichealth.arizona.edu/chwtoolkit/PDFs/Logicmod /chapter2.pdf http://www1.uwex.edu/ces/lmcourse/ http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/task _4.htm http://www.lri.lsc.gov/pdf/other/TIG_Conf._Materials/EMcKay _Logic_Model_Intro_LSC.pdf evaluator should choose the method that works best for each situation. Examples of program objectives include: • Assist substance abusing offenders in abstaining from drug use • Ensure that victims of crime feel compensated for their losses • Improve by one grade level reading scores for 80% of the juveniles who complete the program Learn More... Establishing Evaluation Criteria (U.S. Department of Housing and Urban Development) The Logic of Evaluation (Office of Juvenile Justice and Delinquency Prevention) Measuring Performance When There is No Bottom Line (Bureau of Justice Assistance)
  • 68. The Problem of Defining Agency Success (Bureau of Justice Assistance) State your Program Objectives in Measurable Terms (U.S. Department of Housing and Urban Development) What You Expect: Building A Theory of Action (National Institute of Justice, Chapter 2) Measuring Activities and Outputs: Process Evaluation Once a program has identified its goals and objectives, it needs to specify the major activities or processes that it will undertake that will lead to accomplishing these goals and objectives. One component of measuring a program's performance is to determine whether activities were actually implemented as planned. The reason that this is important is that if activities are not implemented as planned, then there is no reason to believe that the activities as they were implemented will produce the desired objectives. The immediate results of activities are referred to as outputs. Output measures are indicators of the degree to which activities were implemented as planned. Examples of output measures include: • Number of offenders receiving counseling services • Number of community service projects completed • Proportion of parolees who receive drug tests http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/step
  • 69. _4.htm http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/docu mentg.html#developing-an-effective http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/mea suring_performance_when_there.htm http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/prob lem_of_defining_agency_succe.htm http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/step _3.htm http://www.ncjrs.org/pdffiles1/nij/187350.pdf Process evaluation focuses on program implementation. Process evaluations generally involve: reviewing program documents, interviewing program staff, observing program operations, and collecting data from program files. In addition to collecting data on output measures, process evaluations examine a number of additional questions; for example: • How well were key program elements, such as multiagency collaboration, implemented? • Did the program serve its target group (for example, high risk probationers)? • What was the dropout rate for the program, and how can this rate be reduced? Learn More... Implement a Process Evaluation to Document What is Done, When, by Whom, To Whom (National Institute of Justice) Measurement Issues (Office of Juvenile Justice and Delinquency Prevention)
  • 70. Process Analysis (The Urban Institute) Program Implementation (General Accounting Office) Measuring Outcomes: Impact Evaluation Another component of measuring a program's performance is determining whether the activities produced the desired effects or outcomes or, put another way, whether the program achieved its objectives. Measuring outcomes tells the program and the evaluator what impacts the program has had or what results it has achieved. Such impacts are usually expressed in terms of behavior change in those served by the program: reducing reoffending or increasing knowledge about the negative consequences of substance abuse. Outcomes may be divided into short-term, intermediate, and long-term outcomes, with the last usually being the program goal. There are a number of different ways to define and measure any particular outcome. The choice of a measurement method is critical to the program assessment process. A professional evaluator can be useful in helping to develop and identify valid and reliable outcome measures. Learn More... Basic Outcome Evaluation and Comparative Outcome Evaluation (Office of Juvenile Justice and Delinquency Prevention) Measuring Program Outcomes (Office of Juvenile Justice and Delinquency Prevention)
  • 71. http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/impl ement_a_process.htm http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/impl ement_a_process.htm http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/mea surement_issues.htm http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/stop 5-9.html#process_analysis http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/docu mentee.html#we-frequently-are http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/basi c_outcome_evaluation.htm http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/docu mentg.html#measuring-outcomes Varieties of Outcome Measures (Office of Juvenile Justice and Delinquency Prevention) Establishing the "Activities-Outcomes" Connection: Evaluation Experiments Performance measurement can and should assess program outcomes. However, in order to establish the connection between a program's activities and observed outcomes, an impact evaluation, in the form of an experiment or randomized controlled trial (RCT), is necessary. The RCT involves assigning individuals randomly to participate in the program, then comparing outcomes for program participants and non-participants. While in theory all programs should be evaluated using RCTs, practical considerations limit their use in many situations. In order to illustrate the advantages and disadvantages of evaluation
  • 72. experiments, three common evaluation designs are reviewed: • Pre-experimental (pre-post) design • Quasi-experimental (comparison group) design • Experimental (control group) design (randomized controlled trial) Learn More... Allocate Sufficient Funds for an Impact Evaluation: If Controlled Experimentation is Infeasible, Approach Less Rigorous Designs with Caution and Imagination (National Institute of Justice) Impact Evaluation Designs and The Impact Evaluation Design 'Decision Tree' (The Urban Institute) Methods of Analyzing Data (National Institute of Justice) Observing Behavioral Outcomes and Attributing Changes to the Program (U.S. Department of Education) Establishing the "Activities-Outcomes" Connection: Evaluation Experiments Quasi-Experimental (Comparison Group) Design In this design, change is assessed by comparing perceptions or behaviors of program participants with those of non-participants (comparison group). If outcomes for the two groups differ in the expected way (e.g., program participants have lower recidivism rates than non-participants), then the evaluator assumes that the difference was caused by the program.
  • 73. The assumption here is that the program participants are exactly like the non-participants in every way except that they received the program services, so any differences between the two http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/vari eties_of_outcome_measures.htm http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/allo cate_sufficient_funds.htm http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/allo cate_sufficient_funds.htm http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/stop 5-9.html#impact_evaluation_designs http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/stop 5-9.html#decisiontree http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/chap ter_3_nij_guide.htm http://www.ed.gov/offices/OUS/PES/primer5.html must be due to the program. In such designs, evaluators often select non-participants who match participants on key factors, such as age, gender, and criminal history. The trouble with this design, however, is that the evaluator can never be certain that the groups are exactly the same on every factor that might lead to differences in observed outcomes. The evaluator can have more confidence in the results of a quasi- experiment than he or she can in the results of the pre-post design, but still cannot be certain that the program activities caused the observed outcomes. Learn More...
  • 74. The Nonequivalent Comparison Group Design (Government Accounting Office) Non-Random Comparison Group (National Institute of Justice, pp. 4.5-4.6) Establishing the "Activities-Outcomes" Connection: Evaluation Experiments Pre-Experimental (Pre-Post) Design The pre-post design measures program outcomes by comparing perceptions or behaviors at the end of the program (post) to some baseline, usually the same elements measured at prior to the start of the program (pre). If program participants change in the expected direction, then the outcomes are said to have been achieved. The difficulty with this design is that it is not possible to attribute any observed changes to the program itself, as opposed to other factors that might have produced the changes. In other words, it is impossible to conclude that the program activities caused the observed outcomes. Learn More... The Before-and-After Design (General Accounting Office) Pre- and Post-Test Scores (National Institute of Justice, p. 4.8) Threats to Validity (Office of Juvenile Justice and Delinquency Prevention)
  • 75. http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/docu mentbb.html#before-and-after http://www.ncjrs.org/pdffiles1/nij/187350.pdf http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/docu mentg.html#evaluation-design Establishing the "Activities-Outcomes" Connection: Evaluation Experiments Quasi-Experimental (Comparison Group) Design In this design, change is assessed by comparing perceptions or behaviors of program participants with those of non-participants (comparison group). If outcomes for the two groups differ in the expected way (e.g., program participants have lower recidivism rates than non-participants), then the evaluator assumes that the difference was caused by the program. The assumption here is that the program participants are exactly like the non-participants in every way except that they received the program services, so any differences between the two must be due to the program. In such designs, evaluators often select non-participants who match participants on key factors, such as age, gender, and criminal history. The trouble with this design, however, is that the evaluator can never be certain that the groups are exactly the same on every factor that might lead to differences in observed outcomes. The
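Here is a minimal Python sketch of the pre-post calculation, with fabricated scores. As the text notes, the computed change cannot by itself be attributed to the program.

# Pre-post design: mean change from baseline among the same participants.
# Scores are fabricated; lower is better in this example.

pre  = [12, 15, 9, 14, 11, 13]   # problem-behavior scores at program entry
post = [10, 13, 9, 11, 10, 12]   # the same participants at program exit

changes = [after - before for before, after in zip(pre, post)]
mean_change = sum(changes) / len(changes)
print(f"Mean change: {mean_change:+.2f}")  # -1.50 (improvement, cause unknown)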
  • 76. evaluator can have more confidence in the results of a quasi- experiment than he or she can in the results of the pre-post design, but still cannot be certain that the program activities caused the observed outcomes. Learn More... The Nonequivalent Comparison Group Design (Government Accounting Office) Non-Random Comparison Group (National Institute of Justice, pp. 4.5-4.6) http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/docu mentbb.html#nonequivalent-comparison http://www.ncjrs.org/pdffiles1/nij/187350.pdf Establishing the "Activities-Outcomes" Connection: Evaluation Experiments Experimental (Control Group) Design (Randomized Controlled Trial) As in the quasi-experiment, a randomized controlled trial (RCT) involves comparing program participants and non-participants. In order to ensure equivalence, the RCT involves randomly assigning participants to groups. This means that which offenders receive program services and which do not is decided not by a judge or other criminal justice administrator, but by the
  • 77. evaluator. This random assignment procedure is the best way of ensuring that there are no differences between program participants and non-participants except for the program services provided to the former group. This design, however, cannot always be employed to assess criminal justice initiatives. For some initiatives, like community-wide efforts and multijurisdictional law enforcement drug task forces, assigning cases randomly is not feasible. In other cases, judges and other criminal justice administrators may refuse to surrender their discretion in the interests of sound evaluation practice. Learn More... Random Assignment (National Institute of Justice, pp. 4.3-4.4) The True Experiment (General Accounting Office) Use of Random Assignment (Office of Juvenile Justice and Delinquency Prevention) Data Collection New or Existing Data? Most programs collect some information that is potentially useful for evaluation. At the outset, the evaluation needs to assess what data already exist, what the quality of the data are, and whether they are readily available in a useable form. The answers to these questions will help to determine whether existing data can be used, or whether new data must be collected.
  • 78. http://www.ncjrs.org/pdffiles1/nij/187350.pdf http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/docu mentbb.html#true-experiment http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/use_ of_random_assignment.htm When planning an evaluation, the evaluator must determine whether existing or new data will be used in data analysis. The advantage of using new data is the greater control an evaluator has over the measures, procedures, and data collection staff, which can contribute to greater reliability and validity of the data. Using existing data has the advantage of cost savings, because time, effort, and other resources are not spent on collecting new data. Learn More... Data Collection (U.S. Department of Housing and Urban Development) How Do You Get the Information You Need for Your Evaluation? (U.S. Department of Housing and Urban Development) Obtaining Information for Evaluations - Use Existing Data or Collect New Information? (National Institute of Justice) Using Existing Data Sometimes evaluators are able to use information that already exists without going through the
  • 79. expensive and time-consuming process of collecting new data. Information collected by the program for a variety of purposes may have value for performance measurement and evaluation. Evaluators can often make relatively small changes in the program's practices and procedures that will result in data that can be more readily used for evaluation. Examples of existing data on program participants that might be able to be used for evaluation include: • Attendance records • Counseling forms and progress notes • Discharge summaries • Presentence investigation reports • Psychological testing and other classification information Learn More... Ensuring That Evaluations Yield Valid and Reliable Findings (U.S. Department of Education) Verifying the Accuracy of the Data (U.S. Department of Housing and Urban Development) http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/task _6.htm http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/chap ter_6_how_do_you_get.htm http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/chap ter_2_nij_guide.htm http://www.ed.gov/offices/OUS/PES/primer6.html http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/veri fying.htm
  • 80. Using New Data Even if some evaluation data are currently collected, they will often need to be supplemented by the collection of additional data. These new data can be collected through various strategies: Direct Observation Obtaining data by on-site observation has the advantage of providing an opportunity to learn in detail how the project works, the context in which it exists, and what its various consequences are. However, this type of data collection can be expensive and time- consuming. Observations conducted by program staff, as opposed to an outside evaluator, may also suffer from subjectivity. Interviews Interviews are an effective way of obtaining information about the perceptions of program staff and clients. An external evaluator will often conduct interviews with program managers, staff members, and clients to obtain their perceptions of how well the program functions. A disadvantage to conducting interviews is that they can be time- consuming and costly, and produce subjective information. Surveys and Questionnaires Surveys and questionnaires can provide information on program staff members' perceptions of program operations and their own functions.
  • 81. Surveys of clients can provide information on attitudes, beliefs, and self-reported behaviors. An important benefit of surveys is that they provide anonymity to respondents, which can reduce the likelihood of biased reporting and increase data validity. A variety of issues are associated with the use of surveys and questionnaires, including reading level, cultural bias, and sensitivity to particular wording. Official Records Official records and files are one of the most common sources of data for criminal justice evaluations. Arrest reports, court files, and prison records all contain much useful information for assessing program outcomes. Often these files are automated, making accessing these data easier and less expensive. Learn More... Basic Guidelines for the Development of Survey Items (Office of Juvenile Justice and Delinquency Prevention) Data Collection Strategies (The Urban Institute) Developing and Using Questionnaires (General Accounting Office) http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/basi c_guidelines_for_the_develop.htm http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/stop 5-9.html#data_collection_strategies http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/docu