Social Change, Leadership, and Advocacy
Program Transcript
NARRATOR: Change is a process that involves leadership and collaboration. Listen as Dr. Judy Lewis explains the change process, some of the barriers to effective change, and the ethical considerations counselors must keep in mind when pursuing change.
JUDY LEWIS: Change is around us all the time. The leaves change in the fall. The weather changes with the seasons. Change is so much a part of life that sometimes it seems as though the difficult part is being able to respond, react, and adjust to it.

But what we're really talking about here is purposeful change. A person, or people, decide that there is a gap between what is and what should be. There's a gap between the real and the ideal. What they want to do, then, is to find a way to bridge that gap, to find a way to make change so that the real and the ideal come closer together.

Now, once you try to do that, that's the farthest thing from easy. It's always very difficult to bring about change.
First you have to look at how you can prove that the need for change is there. You have to have a needs assessment: why do we have to have this change? You have to come up with data that indicate that the change is necessary and possible.

But then, even though you may have the most logical story in the world, you still come up against barriers. There will be people who are uncomfortable with the
Sometimes, though, we realize that the solution isn't necessarily just in the hands of the client. Sometimes we realize that there are barriers that are preventing the client from achieving his or her goals. That's when we get into social change. And we have, I think, a particular talent for that, once we get comfortable with it.

Another thing that I think is important about counselors in particular and social change is that, once we have that needs assessment, we do have the skill that it takes to bring about change. We're used to having the kinds of interpersonal connections and communication styles that really are at the heart of effective change.

So we have a reason to be involved in change, and we have some skills that help us to do a good job when it comes to change.
I think of so many examples of when counselors, for instance, have been involved in social change projects that have been effective. There's one example in particular that I've always found interesting, and that is a group of mental health counselors in a town in the Southwest. They were working with families of young people who were involved in the juvenile justice system.

What they had been told was that the problem was that these families didn't seem to be motivated: that they didn't show up for their appointments when they were supposed to see their probation agents; that they didn't follow through when they were supposed to go to various offices to take care of advocacy for their kids.

Now, what happened was that the counselors kept seeing this again and again with so many families. And they kept being told, oh, these families just aren't motivated. But they realized that there were other things that were getting in the way.

One thing was that a lot of the offices were only open during the day, when the parents had to work. Another thing was that sometimes there were long distances that they had to go, and they couldn't afford it. They didn't have cars, or they couldn't afford the gas.
This was really an amazing example that points up some real issues. One is that you can't always know in advance what kind of change is needed. That idea about your clients being the needs assessment really came into play there. Who would've guessed, if counselors started thinking about what's needed in this community, that what was needed was a change in the bus routes? But it was. And because they were listening to their clients, listening to the families that they were working with, they were able to get involved in making a difference.

There's another point, I think, that's important here with this example, too. And that is that, when you look for the source of the problem, there's a lot of pressure to look at the source of the problem within the client or within the family. But if you open your mind as a helping professional, and look beyond the individual and beyond that family, you can see that sometimes the source of the problem is in the community. And then you can have a positive impact on a lot of clients by making a change with other community members.
A professional code of ethics has some clout in the profession. What that means is that there is a strong expectation, a requirement, that people who belong to a particular profession do their work in a way that adheres to the code of ethics.

So, if an ethical code of a profession says, for instance, that counselors must be aware of the importance of multiculturalism and diversity in working with their clients; if the ethical code says that counselors must be able to use advocacy as needed on behalf of their clients; if the ethical code says that clients need to be seen not as totally the source of the problem, but maybe as victims of a problem in the community; if the ethical code says that that's an expectation for what counselors should be able to do, then I think it happens.

Traditionally, the codes of ethics across most of the helping professions have dealt with, say, multiculturalism and diversity not from an action orientation, but from a negative one, in the sense that most of the codes of ethics would say a counselor will not discriminate against the client based on race or gender, say. It wouldn't say the counselor must stick up for the client when the client is discriminated against by someone else.

That reflects a change, and it's an orientation of codes of ethics toward change. And when that happens, then I think that professions will move in that direction, because it's through the code of ethics that counselors and other professionals know what it is they have to do in order to be meeting their professional standards.
Additional Content Attribution
FOOTAGE:
GettyLicense_85184635 (Spring)
[Derek Wood Photography]/[Moment]/Getty Images
GettyLicense_96838741 (Summer)
[Vyacheslav Osokin]/[E+]/Getty Images
GettyLicense_454395085
[MaxRiesgo]/[iStock / Getty Images Plus]/Getty Images
GettyLicense_86485207
[Thinkstock Images]/[Stockbyte]/Getty Images
GettyLicense_118657765
Keywords: Self-Determination Theory; Evaluation anxiety; Human service programs; Cultural responsiveness; Foundational theory; Systems thinking

ABSTRACT
This paper offers a framework for using a systems orientation and ‘‘foundational theory’’ to enhance theory-driven evaluations and logic models. The framework guides the process of identifying and explaining operative relationships and perspectives within human service program systems. Self-Determination Theory exemplifies how a foundational theory can be used to support the framework in a wide range of program evaluations. Two examples illustrate how applications of the framework have improved the evaluators’ abilities to observe and explain program effect. In both exemplars, improvements involved addressing and organizing into a single logic model heretofore seemingly disparate evaluation issues regarding valuing (by whose values); the role of organizational and program context; and evaluation anxiety and utilization.

© 2009 Elsevier Ltd. All rights reserved.
Evaluation and Program Planning
journal homepage: www.elsevier.com/locate/evalprogplan
1. Introduction

Human service program outcomes depend on the relationships within and between systems that surround two groups of people: program providers and the individuals they target. Therefore, evaluation of these programs involves either measuring or controlling for the functionality of these relationships or making assumptions about them. These assumptions might include, for instance, that program administrators support the successful functioning of program providers; that program providers have interest in providing program services; that individual participants’ family and community systems interact with the program to support the intended outcomes; that there is a positive relationship between program providers’ targeted outcomes and individual participants’ more generalized well-being; and that the evaluation process has a ‘‘do no harm’’ relationship to achieving program outcomes. The evaluation literature has provided little guidance for systematically measuring or controlling for the effects of these relationships. This paper addresses that gap by introducing a framework that helps evaluators organize, define, measure, and integrate these otherwise assumed effects into a typical outcomes-based evaluation design.
D.L. Wasserman / Evaluation and Program Planning 33 (2010) 67–80
doi:10.1016/j.evalprogplan.2009.06.005
A feature that distinguishes evaluation researchers (i.e., researchers specifically trained in the evaluation discipline) from researchers trained in other disciplines who conduct program evaluations is that evaluation researchers have been highly concerned with identifying these influential, and too often ignored, contextual relationships. They have sought to account for them in ways that will enhance evaluation validity and utility. For instance, issues such as valuing (by whose values); the role of organizational and program context in the achievement of desired outcomes; and evaluation anxiety and utilization have been central to dialog within the American Evaluation Association (Chen, 2004; Donaldson, Gooler, & Scriven, 2002; Mark & Henry, 2004; Scriven, 1999; Wandersman, Imm, Chinman, & Kaftarian, 2000). Despite the acknowledgement of the importance of these issues, in all too many program evaluations the underlying assumptions, each related to at least one of these important evaluation issues, remain undefined and unmeasured. But, as evaluation researchers are aware, accepting these assumptions as true and constant can severely compromise the meaning, utility, and consequences of evaluation results.
One means for addressing these assumptions and related issues has been to introduce systems thinking to program evaluation (Cabrera, Colosi, & Lobdell, 2008; Williams & Imam, 2007). These systems-thinking approaches can be infused into any discipline (e.g., engineering, philosophy, social science, economics, etc.) or type of evaluation. As Cabrera and colleagues explain, systems thinking is ‘‘based on contextual patterns of organization rather than specific content. For example, systems thinking balances the focus between the whole and its parts, and takes multiple perspectives into account. . . [It is thinking that] transgresses parts and wholes, takes new perspectives, forms new relationships, and makes new distinctions’’ (p. 301).
Cabrera and colleagues have synthesized the many methodological systems approaches (e.g., soft systems, critical systems, complex adaptive systems, etc., many of which have been described in Williams & Imam, 2007) into four simple systems-thinking rules involving Distinctions, Systems, Relationships, and Perspectives (D–S–R–P). They have shown how these rules, used for defining and understanding systemic patterns, are important to evaluation research. Within that effort, Wasserman (2008) has provided a guide for applying these D–S–R–P rules to human service program evaluation (Table 1). This paper furthers that guide.
This paper’s first section demonstrates how building a program model with a systems orientation invites a more general use of social science theory than has been described to date in the theory-driven evaluation literature. The term foundational theory will be used to reference this broader use of social science theory. The paper’s second section delineates the operative distinctions, systems, relationships, and perspectives of human service programs. The third section introduces Self-Determination Theory (SDT; Ryan & Deci, 2000b) as an example of how a foundational theory guides the selection, measurement, and analysis of the operative relationships and perspectives. Finally, two exemplar evaluations designed and implemented by The Ohio State University Center for Family Research illustrate how use of this foundational theory-based framework has enhanced the evaluators’ ability to observe and explain program effect while being
more responsive to the needs, influences, and values of the program stakeholders, i.e., the individuals who comprised each program’s provider–recipient systems.

Table 1
Operative distinctions, relationships, and perspectives of human service program systems, with related areas of evaluation research.

Distinctions between nested parts (from least to most encompassing)
- Provider system: Program goals and objectives; program activities; program providers; administrators; funders; community stakeholders; macroenvironment; and relationships between them.
- Target system(s): Personal goals and objectives; existing conditions; targeted individual(s); family, friends, and community; macroenvironment; and relationships between them.
- Human service program system: Program outcomes; program participation; and all nested parts and relationships of the provider and target systems that affect and experience effect of the program service.

Operative relationships
- Provider system: Funders to administration; administration to providers; provider, administrator, and funders to program objectives; program objectives and resources to program activities*; providers to program activity; funder and administration effect on program activity; program activity effect on funder, administration, and providers.
- Target system(s): Targeted individual’s effect on existing conditions and effect of existing conditions on targeted individual; target environment effect on existing conditions; effect of existing conditions on target environment.
- Human service program system: All operative relationships listed in provider and target systems. In addition: participant to program provider; participant to program activities; participant’s environment to program activities; program activities to program outputs and outcomes*; provider environment (funders, administration, other stakeholders) to program evaluation results; participant environment (family, friends, community) to program evaluation results.

Perspectives
- Provider system: View of the various relationships to program activities and program objectives, by program providers; administrators; funders; and macrosystem stakeholders.
- Target system(s): View of the relationships to targeted existing conditions, by targeted individuals; family, friends, and community; and macrosystem policy makers and resource providers.
- Human service program system: All operative perspectives listed in provider and target systems. In addition: view of the value of program activities; view of the value of expected and unexpected outcomes; and response to evaluation feedback, by program participants; influential members among program participants, family, friends, or community; and program providers, administrators, funders, etc.

Related evaluation research
- Provider system: Organizational assessment; performance assessment and evaluation; organizational development studies; program monitoring.
- Target system(s): Needs assessments; risk and asset assessments; behavioral research; epidemiological studies.
- Human service program system: Formative and summative human service program evaluations.

* Relationship explained by causative theory.
2. The need for a systems orientation and foundational theory
Program evaluators have come to understand the value of explicating relationships between program resources, activities, and effects. To this end, program logic models have been adopted widely (Hatry, Van Houten, Plantz, & Greenway, 1996; Knowlton & Phillips, 2009; W.K. Kellogg Foundation, 2004) as the tool for articulating those relationships. By revealing processes within the black box between program activities and outcomes, logic models provide an important tool for program evaluation and quality improvement (Kaplan & Garrett, 2005; McLaughlin & Jordan, 1999; Rogers, 2000; Savaya & Waysman, 2005).
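The chain a logic model articulates can be sketched as a small data structure. The sketch below is purely illustrative (the class and field names are hypothetical, not drawn from the cited logic-model guides); it shows the four conventional components and the posited if-then chain between them.

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """A minimal logic model: inputs feed activities, which are expected
    to yield outputs and, ultimately, outcomes."""
    inputs: list[str] = field(default_factory=list)      # resources
    activities: list[str] = field(default_factory=list)  # what the program does
    outputs: list[str] = field(default_factory=list)     # direct products
    outcomes: list[str] = field(default_factory=list)    # intended changes

    def chain(self) -> str:
        """Render the posited if-then chain between the four components."""
        stages = (self.inputs, self.activities, self.outputs, self.outcomes)
        return " -> ".join("; ".join(stage) for stage in stages)

# A hypothetical counseling-program example:
model = LogicModel(
    inputs=["counselors", "meeting space"],
    activities=["weekly family sessions"],
    outputs=["sessions delivered"],
    outcomes=["improved family engagement"],
)
print(model.chain())
```

The arrows in the rendered chain correspond to the causal links that, as discussed below, a change model must explain.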
In the language of theory-driven evaluation (Chen, 2004), these components and processes constitute a program’s change model. When based on an underlying formal or informal causative theory that establishes how the posited relationships lead to intended outcomes, these models can become more valid, informative, and useful (Chen, 1990, 2004; Stame, 2004). However, even theory-driven change models have been criticized for their limited ability to explain complex, sometimes contradictory, program results and the influences that produce them (Davies, 2004; Gasper, 2000; Rogers, 2000).
Recognizing the limitations, evaluation theorists have long acknowledged the need to augment change models with the additional context and feedback variables of what Chen (1990) labeled the ‘‘action model’’ (Altschuld & Kumar, 1995; Cronbach, 1982; Davies, 2004; Rogers, 2000; Schalock & Bonham, 2003).
However, these authors have provided limited direction for doing so. Often, guides for evaluators have lumped contextual factors together under generic terms such as ‘‘influential factors’’ (W.K. Kellogg Foundation, 2004) or ‘‘ecological context’’ (Chen, 2004). One handbook for creating logic models goes a bit further, dividing ‘‘external factors’’ into political environment, economic situation, social/cultural context, and geographic and other constraints (Innovation Network, Inc., 2009). If any of these potential modifiers lie outside the explicit causative theory of why and how activities produce outcomes, there is no systematic way of identifying them other than informally answering fairly arbitrary questions like those that explain the categories in the handbook, e.g., ‘‘Is bad weather likely to interfere with service delivery?’’ Or, ‘‘Are you working in a community that welcomes your program?’’ (p. 20).
Along with contextual factors, unexpected outcomes also occur outside of the change model. Recognizing the importance of tracking these unexpected outcomes, proponents of goal-free evaluation approaches urge the avoidance of logic models completely (Scriven, 1991). In other words, an evaluation based on a delineated system excludes results occurring outside the system. A challenge therefore is to find a way to systematically identify indicators that expand the evaluation beyond the change model’s boundaries, and to identify theory that explains them.
Literature on theory-based evaluation provides some guidance to this expansion. Donaldson (2007) demonstrates how contextual variables can be statistically modeled as moderators (i.e., contextual influences) of either the mediator-outcome relationship or the program-mediator relationship. In both cases, the mediator is defined as program outputs or shorter-term outcomes. Chen’s (2004) change-model/action-model conceptual framework defines the change model as consisting of intervention, intermediating ‘‘determinants,’’ and outcomes. In turn, the action model consists of contextual system parts with logical links from ‘‘implementing organizations and implementers,’’ through ‘‘associated organizations’’ and ‘‘ecological context,’’ to the ‘‘intervention and service delivery protocols’’ and the ‘‘target population’’ (p. 29). In this framework, ‘‘causative’’ theory explains the change model (i.e., how certain conditions generate or influence targeted effects) and ‘‘normative’’ theory explains the action model (how various components work together to support the change model). Additionally, Chen’s framework includes feedback loops both within and between the change and action models.
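The moderator idea attributed to Donaldson above can be illustrated with a short simulation. This is a hypothetical sketch, not an analysis from any study cited here: one common way to model a contextual moderator of the mediator-outcome relationship is to include a product (interaction) term in an ordinary-least-squares fit, so that the mediator's effect on the outcome depends on the contextual variable.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

mediator = rng.normal(size=n)   # e.g., a shorter-term outcome (program output)
context = rng.normal(size=n)    # e.g., a contextual influence such as organizational support
noise = rng.normal(scale=0.1, size=n)

# Data-generating process: the mediator's effect on the outcome grows
# with the contextual moderator (true interaction coefficient = 0.5).
outcome = 1.0 + 0.8 * mediator + 0.3 * context + 0.5 * mediator * context + noise

# Ordinary least squares with an intercept and a product (interaction) term.
X = np.column_stack([np.ones(n), mediator, context, mediator * context])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print(coef.round(2))  # close to the true values [1.0, 0.8, 0.3, 0.5]
```

A nonzero interaction coefficient is the statistical signature of moderation: the mediator-outcome slope is not constant but shifts with context, which is exactly the kind of assumed relationship the framework asks evaluators to measure rather than take on faith.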
Using established social science theory, where it exists, has been a useful and often preferred approach to explaining the change model (Chen, 2004). At times, as is the case with multisystemic therapy (Henggeler, 1999), for instance, the social science theory underlying the change model involves an array of factors often included in the action model (e.g., family and community involvement). However, even in these cases, also operative is an additional array of action-model assumptions not included in the change model (e.g., those related to fidelity). Writing at the time, Chen noted that although he would welcome social science theory to explain action models, social science theory appeared to be limited to explaining change models:

The action model deals with nuts and bolts issues, which are not a major topic in most modern social science theory, perhaps due to the social sciences’ emphasis on developing generalizable propositions, statements, and laws. ‘‘How to’’ program issues tend to be trivialized by contemporary social science theory. Plus, the action model has no proposition-like format resembling that defined by and familiar to modern social scientists (Chen, 2004, p. 18).
Without a systems orientation and the consequent systematic way of detailing operative relationships and perspectives, proposition-like formats are indeed difficult to formulate from the action model. But when contextual factors are understood with a systems orientation and with Cabrera et al.’s (2008) four D–S–R–P rules (referenced above), propositions emerge, and social science theory that explains them can be selected and applied. The social science theory selected to explain an action model and its relationship to the change model is necessarily different than the social science theory used as causative theory to explain the change model itself. The systems-defined propositions of a change model call for a foundational theory that more generally explains the ‘‘how-to’’ relationships between distinct parts and operative perspectives of any human system, and how these perspectives and relationships function to affect the interaction between systems. Foundational theory is far less specific than what Chen (2004) describes as the normative theory that explains the action model. Normative theory defines and describes specific components or ‘‘nuts and bolts’’ that support and contribute to the change model. Foundational theory explains why and under what conditions those components function and how their quality can be evaluated.

Foundational theory also enhances causative theory. Whereas causative theory explains why and how program activities will lead to intended outcomes, foundational theory explains why and under what conditions the causative theory will be valid. For instance, a causative theory may explain how a given curriculum produces learning or how a treatment regimen inhibits a disease: those are the mechanisms by which specific programs function. Foundational theory, which generally explains how human systems function and interact, explains the assumptions that make causative theories valid: e.g., when a student will pay attention to a curriculum, or how to know when inhibiting a disease is in line with the intentions of the patient.
Supported by measurements made possible through foundational theory, relationships found within and between the change and action models become more informative to the evaluation. In addition, a systems orientation enhances both the change and action models with the concept of perspective. Explanation of program system relationships based on causative and normative theory alone involves only a singular perspective, one agreed to by whatever combination of stakeholders have contributed to the evaluation design, however wide and diverse that group might be. A systems orientation reminds the evaluator to consider that, within that singular perspective, a full range of perspectives operate. A systems orientation suggests that the quality of system relationships not only varies, but varies across perspectives, and that the variance within each perspective may differently affect outcomes. Foundational theory explains how and why the perceptions, definition, and value of these relationships vary within and between perspectives.
Selection and application of foundational theory, however, is dependent on being able to develop specific propositions that the theory explains. The next subject to be addressed, therefore, is the systematic delineation of the components of a human service program system that require those propositions: the distinctions, relationships, and perspectives.
3. A systems orientation: defining human service program system distinctions, relationships, and perspectives
For the purpose of describing human service program systems, a human service program will be defined here as any situation wherein one human system intentionally attempts to affect at least one human being nested within an otherwise virtually independent ‘‘target’’ system (Wasserman, 2008). What makes this situation a program is the distinction between the provider system and at least one target system, and the intentional nature of the relationship between them. From a systems-thinking perspective, evaluating the functionality of a human service program therefore involves possible analysis of patterns that emerge from distinctions, relationships, and perspectives (Cabrera et al., 2008) formed from or functioning in relation to the interaction between the provider and target systems. The components of each of these systems, along with the new components defined by the interaction between them, are listed in Table 1 (first introduced in Wasserman, 2008).
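The definition above, one human system intentionally attempting to affect at least one otherwise independent target system, can be sketched as a minimal type. The class names below are hypothetical illustrations, not constructs from Wasserman (2008); the sketch simply encodes the two defining conditions: at least one target system, and distinctness between provider and target.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class HumanSystem:
    name: str
    members: tuple[str, ...]

@dataclass(frozen=True)
class Program:
    """A program exists only when a provider system intentionally targets
    at least one distinct, otherwise independent human system."""
    provider: HumanSystem
    targets: tuple[HumanSystem, ...]

    def __post_init__(self):
        if not self.targets:
            raise ValueError("a program needs at least one target system")
        if any(t == self.provider for t in self.targets):
            raise ValueError("provider and target systems must be distinct")

# Hypothetical example: a counseling program targeting one family system.
clinic = HumanSystem("counseling program", ("providers", "administrators"))
family = HumanSystem("target family", ("youth", "parents"))
program = Program(provider=clinic, targets=(family,))
print(program.provider.name, "->", [t.name for t in program.targets])
```

Constructing a `Program` whose target equals its provider raises an error, mirroring the paper's point that without independence between the two systems the study is organizational assessment rather than program evaluation.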
The provider’s system constitutes one set of relationships (Table 1, Provider system) included in a human service program system. In its simplest form, this set of nested relationships houses the program activities, produced by a program provider within a program within an organization within a larger environment of practices, programs, policies, resources, and norms (the program’s macroenvironment). The target system comprises a second set of relationships (Table 1, Target system(s)) included in the program system. Certain conditions exist within or around any of the program’s targeted individuals. Each is nested within a family or community, and within other programs (social, medical, bio-behavioral, educational, etc.), all of which are also situated in a macroenvironment of practices, policies, resources, and norms, some overlapping but often independent of those affecting the provider system.
Analysis of either a provider or target system alone is insufficient for a human service program evaluation, which depends on analysis of both systems along with additional analysis of relationships formed as a result of one system’s intention to affect the other (Table 1, Human service program system). Studies involving only the provider system (i.e., where there is no independence between provider and target systems) can be considered organizational assessments or organizational development evaluations; those involving only the target system might be considered needs assessment, risk or asset assessment, assessment of the person-in-environment fit, or perhaps even a form of interactional behavioral research. Studies involving the target system’s unrequited intentional relationship with the provider system might also be considered needs assessment. As conceptualized here, a focus on interactions within and between both provider and target systems, and independence between those systems, is necessary for the research to be considered program evaluation.
Fig. 1. A generic program model with eight pulse points.

Defining and viewing a human service program as simultaneously existing within the two nested provider and target systems creates a way for an evaluator to systematically scan a full range of relationships available for measurement, and then to identify, select, and focus on those relationships important to answering evaluation questions of interest. In this conceptualization of program systems, outcomes exist in the context of program participation. Both participation and outcomes are nested within the relationship between a program’s activities and a participant’s existing conditions, each of which is nested in its own respective system. Moreover, the program system includes formal and informal program evaluation feedback. For providers, this feedback arrives both directly and indirectly through administrators and other stakeholders. For participants, feedback about program success usually arrives indirectly through program providers, family, friends, and community.
A generic model of these relationships is presented in Fig. 1. In addition to the relationships explained by change models (comprised of inputs, activities, outputs, and outcomes; shown shaded in Fig. 1), this systems-based model includes the action model in the form of operative system distinctions (shown in white boxes in Fig. 1 and summarized in Table 1), relationships between them (black arrows), and evaluation feedback relationships (gray arrows). This generic model postulates that outcomes are mediated by outputs, which are in turn mediated by participants, who are influenced not only by program activities but also by both program climate and family/community climates. Moreover, in addition to being the result of program resources, program activities are mediated by providers, who are influenced by the nature of their environments and how they are affected by program objectives, resources, etc. In addition to producing program activities, program providers affect (along with the organizational climate) program climate, which in turn influences participants’ receptivity to program activities. Finally, the model’s design emerges from the acknowledgement that outcomes vary in their value to participants (shown in the model as ‘‘outcome effect’’) and therefore also vary in their longer-term impacts.
Each of these relationships will be explained in more detail below. However, given that they exist, and given theory that explains how they contribute to the functionality of the systems with which they are involved, it is possible to map eight potential measurements (numbered in Fig. 1), or what will be called ‘‘pulse points,’’ because they indicate where a human service program system’s inter- and intra-system functioning can be measured.
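One way to see how the eight pulse points organize measurement is to lay them out as a simple lookup structure. The entries below paraphrase the relationships named in Fig. 1 and Table 2; the dictionary and helper function are only an illustrative sketch, not an instrument from the paper.

```python
# Eight pulse-point relationships, keyed by their number in Fig. 1.
PULSE_POINTS = {
    1: "participant to outcome",
    2: "participant to program activities",
    3: "participant to provider",
    4: "family, community, and other programs on the participant's program outcomes",
    5: "family, community, and other programs as buffers of evaluation results",
    6: "providers to their outputs (program activities)",
    7: "providers to sponsoring organization",
    8: "providers as a buffer of evaluation results",
}

def plan_measurements(selected: list[int]) -> list[str]:
    """Return the relationship descriptions an evaluation will measure."""
    return [PULSE_POINTS[i] for i in selected]

# A hypothetical evaluator focusing on provider-side functioning (points 6-8):
print(plan_measurements([6, 7, 8]))
```

Selecting a subset of pulse points in this way mirrors the paper's suggestion that an evaluator scans the full range of relationships and then focuses on those important to the evaluation questions at hand.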
Table 2
How causative, normative, and foundational theories contribute to the explanation of program system relationships (as organized by pulse points).

Overall purpose
- Causative theory explains: How participants’ interaction with program activities is expected to produce targeted outcomes.
- Normative theory explains: How context and feedback variables are expected to influence the program and outcomes.
- Foundational theory explains: How (and why) the perception, definition, and value of the relationships vary within and between perspectives.

Program model
- Causative theory explains: Change model.
- Normative theory explains: Action model.
- Foundational theory explains: How various perspectives affect and respond to the effectiveness of the distinctions and relationships in both change and action models.

Pulse point relationships:

#1 Participant to outcome
- Causative: Intended intermediate and longer-term outcomes and how they can be measured.
- Normative: (Assumes the targeted outcome is functional.)
- Foundational: How human beings value changes in attitudes, skills, behaviors, etc.

#2 Participant to program activities
- Causative: The amount and nature of interaction with activities necessary to produce outcomes.
- Normative: Contextual influences expected to affect the quality of the activities.
- Foundational: How human perception/experience of an activity affects the outcomes the activity produces.

#3 Participant to provider
- Causative: Expected quality of the provider–participant relationship.
- Normative: Contextual influences expected to affect the quality of the provider–participant relationships.
- Foundational: How human perceptions of relationships affect the relationship and its outcomes.

#4 Family, community, and other programs on the participant’s program outcomes
- Causative: (Assumes relationship is functional.)
- Normative: Expected family, community, and other program influence on program activities, program participation, or outcome sustainability.
- Foundational: Quality of influence of support networks.

#5 Family, community, and other programs’ functionality as buffers of formal and informal evaluation results
- Causative: (Assumes relationship is functional.)
- Normative: Expected social network participants’ response to evaluation results and how those responses affect the production of targeted outcomes.
- Foundational: Human response to performance indicators and its effect on motivation, productivity, etc.

#6 Providers to their outputs (program activities)
- Causative: (Assumes relationship is functional.)
- Normative: Expected contextual influences on providers’ abilities to produce program activities.
- Foundational: How human perception/experience of an activity affects the outcomes the activity produces.

#7 Providers to sponsoring organization
- Causative: (Assumes relationship is functional.)
- Normative: Expected organizational supports for providers’ ability to produce program activities.
- Foundational: How human perception/experience of the workplace affects motivation, productivity, creativity, adaptability, etc.

#8 Providers’ functionality as a buffer of evaluation results
- Causative: (Assumes relationship is functional.)
- Normative: Expected provider response to evaluation results and how those responses will affect the production of outcomes.
Human response to performance
indicators and its affect on
motivation, productivity, etc.
D.L. Wasserman / Evaluation and Program Planning 33 (2010)
67–80 71
overall ‘‘health.’’ While it is probable that additional pulse
points
exist in any given program to be evaluated, these eight provide
a
schema for initial planning.
Table 2 lists the contribution of causative, normative, and foundational theories to understanding each relationship. As such, it provides the reader with an understanding of the unique role of each type of theory. Listed below are brief descriptions of the relationships along with a synopsis of what their foundational theory analysis can contribute to an evaluation.
Pulse point #1, assessing the relationship of the participant to the outcome, generates information about the value of the outcome to the participant (for example, how a stressed, anorexic adolescent student experiences achieving a 4.0 average). This measure modifies negative effects of evaluations that reward the achievement of narrowly focused outcomes while disregarding broader, potentially negative unintended and unmeasured consequences of achieving those outcomes. For instance, some evaluators and evaluands might consider high stakes testing in schools to be an example of accountability measurement that could benefit from being further qualified by the value of the results to the program participant, to the provider, or to the target systems in general.
Pulse point #2, the relationship of the participant to program activities, leads evaluators to question the validity of attributing to the program outcomes achieved in the absence of cooperative and productive relationships between the participant and program activities. For example, in the case of a high achieving student bored or angered by the activities, the evaluator would be hard pressed to claim that these activities led to durable, positive outcomes. More likely, the "successful" student achieved the outcomes despite, rather than because of, the activities.
Pulse point #3, assessment of the relationship between the participant and the provider, is an additional source of variance. Consider, for instance, the effect of the physician–patient relationship on health or of the teacher–child relationship on school achievement. Explanations for what makes these relationships successful as perceived from varying perspectives inform both their measurement and strategies to improve program results.
Each of these three pulse points describes system distinctions and relationships found in the change model. Five more pulse points address relationships more typically found amid the context and feedback variables of action models:
Pulse point #4 describes the influence of family, community, and other programs on the participant's program outcomes. Generally addressed in the evaluation literature as program context, these relationships usually involve practical aspects of programming and family resources such as transportation, network support, medical and educational services, and other financial resources. However, program context can also include less tangible influences such as family and community values in relation to program values or even overall emotional support from family and community members or from outside service providers (e.g., school teachers, medical providers, or counselors). The perspectives involved in a systems orientation acknowledge that the quality of these family and community support relationships not only affects outcomes but in turn is affected by them.
How both formal and informal evaluation results (pulse point #5) are received by families, communities, and other service providers often determines program effectiveness. Consider, for instance, how parent response to student report cards can affect both the students and the school system.
Other types of contextual relationships involve organizational context and, more specifically, the productivity of the program providers. Pulse point #6 describes the relationships of the providers to their outputs (which are the program activities of the change model). Although variance in program activities can be minimized with standards, guidelines, and even regulations, the quality of program activities and their ability to produce outcomes still depends on the relationship of the human being producing the activities to the conditions of producing them. How a provider experiences her or his work may affect outcomes as much as the protocol for the work itself.
Similarly, a provider's performance is influenced by the support received from the organization that administers the program. Pulse point #7, the quality of support from an organization to a provider, defines that relationship. But like the relationship between a family support system and a program participant, the quality of support from an organization to a provider can also be influenced by evaluation feedback. Pulse point #8 measures the effect of evaluation results on the providers' production of outcomes.
Measuring any one of the pulse points will provide an indicator for how well that relationship is supporting the overall functionality of a given provider–recipient system, i.e., its ability to produce outcomes that further the functionality of its sub-systems. For each pulse point, a broad range of questions might be asked. For instance, with pulse point #8, the effect of evaluation results on the providers' production of outcomes, stakeholders might be concerned with how evaluation feedback changes the nature of the outcomes, how it changes the physical resources available to program providers, or even how it affects the pressures and stress placed on providers. Only a rare – and well funded – human service program evaluation would address all eight pulse points. Instead, this framework of systems-oriented considerations provides a systematic way for evaluators, as they design program evaluations, to consider a fuller scope of key evaluation concerns.
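As a planning aid, the eight pulse points can be kept as a simple checklist structure from which an evaluator selects the subset a given design will actually measure. The sketch below is hypothetical (labels paraphrased from Table 2); the paper prescribes no such tool:

```python
# Hypothetical planning aid: the eight pulse-point relationships
# (paraphrased from Table 2) as a checklist an evaluator might scope from.
PULSE_POINTS = {
    1: "participant to outcome",
    2: "participant to program activities",
    3: "participant to provider",
    4: "family, community, and other programs to participant outcomes",
    5: "family, community, and other programs as buffers of evaluation results",
    6: "providers to their outputs (program activities)",
    7: "providers to sponsoring organization",
    8: "providers as buffers of evaluation results",
}

def plan_measurements(selected):
    """Return labels for the pulse points an evaluation will measure
    (few evaluations can afford all eight)."""
    return {n: PULSE_POINTS[n] for n in sorted(selected)}

# For example, a design that omits the two feedback-related points (#5, #8):
plan = plan_measurements({1, 2, 3, 4, 6, 7})
```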
As shown, a wide range of propositions can be formulated for each of these pulse points. Foundational theory will guide the process. One example of a foundational theory is Self-Determination Theory, an organismic systems-based motivational theory (Ryan, 1995; Ryan, Kuhl, & Deci, 1997) that explains productivity and achievement within and between human systems. The remainder of this paper will use Self-Determination Theory to illustrate how a foundational theory and a systems orientation can be used to formulate testable propositions and thereby organize and enhance human service program evaluations. A description of the theory is followed by an account of how it is being used to support change and action models in two diverse evaluation projects.
4. Self-Determination Theory: an example of a foundational theory
Based on thirty years of human motivation research, Self-Determination Theory explains human productivity, motivation, and well-being from an organismic systems perspective (Ryan, 1995; Ryan et al., 1997). As such, it directly applies to human service programs that target productivity and well-being outcomes. According to the theory, human systems function optimally – mentally, physically, and socially – when two conditions exist: first, each system part experiences itself as uniquely contributing to the survival of the system that nurtures it, and second, the system part is able to regulate its contribution in relation to its own needs. This second condition, more concretely stated, requires that a person contributing to a greater system has the autonomy to choose to discontinue his/her contribution in order to eat, rest, regenerate, and otherwise protect her/his own survival.
As sub-systems of nurturing super-systems, human beings have intrinsic motivation to receive nurture from the larger system, to contribute to the survival of that system (which in turn contributes to their own survival), to reassure themselves by acknowledging their own unique contributions to that survival, and to protect themselves from being subsumed by the system. These intrinsic motivations translate into three basic psychological needs: for relatedness (receiving and giving nurture); competence (contribution to system survival); and autonomy (regulation of the contribution) (Ryan & Deci, 2000a).
SDT defines five types of motivation, each gauged by the degree to which behavior is related to satisfying these basic psychological needs (Deci & Ryan, 2000). The term intrinsic motivation refers to the stimulus that causes actions that directly satisfy these needs. The remaining four motivations (integrated, identified, introjected, and extrinsic), all of which begin externally, divide into two types, internalized and external, distinguished by their effect on Basic Psychological Need Satisfaction. Internalized motivations satisfy basic psychological needs; external motivations oppose them. Finally, SDT uses the term amotivation to describe lack of motivation altogether.
Using SDT as a foundational theory to explain program models from a systems orientation is based on the well-supported SDT hypothesis (Deci & Ryan, 2002) that outcomes associated with internalized motivations are more predictive of longer term well-being than outcomes associated with external motivations and threatened need satisfaction. Internalized motivations are accompanied by Basic Psychological Need Satisfaction, measured as need satisfaction in relation to authority (Koestner & Losier, 1996). Therefore, Basic Psychological Need Satisfaction is the measurable construct by which program evaluators can apply SDT to measuring program outcomes, their value, and how they are affected by context and feedback. Each subdomain of Basic Psychological Need Satisfaction is defined with examples in Table 3.
Evidence supporting the hypothesis that predicts longer term well-being results from outcomes associated with Basic Psychological Need Satisfaction has been demonstrated in SDT research. Specifically, researchers have established at least association and, in some instances, a causal relationship between Basic Psychological Need Satisfaction and health care compliance, mental health, academic success, goal achievement, and pro-social activity (Deci & Ryan, 2000; Deci et al., 2001a; Deci & Vansteenkiste, 2004; Grolnick & Slowiaczek, 1994; Reis, Sheldon, Gable, Roscoe, & Ryan, 2000; Ryan & Deci, 2000a; Sheldon & Houser-Marko, 2001; Wiest, Wong, Cervantes, Craik, & Kreil, 2001; Williams, McGregor, Zeldman, Freedman, & Deci, 2004). These associations have been demonstrated across child, adolescent, college student, workplace, and elderly populations (Baard, Deci, & Ryan, 2004; Deci, Ryan, & Koestner, 2001b; Grolnick & Slowiaczek, 1994; Kasser & Ryan, 1999; Reis et al., 2000; Veronneau, Koestner, & Abela, 2005; Wiest et al., 2001). Although the actions, behaviors, or processes by which need satisfaction occurs change across cultures, developmental age, and even individuals, throughout these groups need satisfaction itself has been found to be consistently measurable and relevant to longer term well-being (Deci et al., 2001a; Ryan et al., 1999; Ryan, La Guardia, Solky-Butzel, Chirkov, & Kim, 2005; Schmuck, Kasser, & Ryan, 2000; Vansteenkiste, Zhou, Lens, & Soenens, 2005). SDT researchers have also found that, in contrast to internalized motivators, reward and punishment (both being external motivators associated with lower Basic Psychological Need Satisfaction) diminish performance (Baker, 2004; Deci, Connell, & Ryan, 1989; Deci, Koestner, & Ryan, 1999; Edward et al., 2002; Ryan et al., 1995; Sheldon, Ryan, Deci, & Kasser, 2004; Vansteenkiste, Simons, Lens, Sheldon, & Deci, 2004).
The three psychological needs are considered "basic" because they emanate from the very survival of an organismic sub-system within its super-system.

Table 3
Basic Psychological Need Satisfaction (BPNS) definitions and examples.

Sense of competence
- Definition (Deci & Ryan, 2000): the self-perception of being engaged in optimal challenges and experiencing the ability to effectively affect both physical and social worlds.
- Example questionnaire items: "I feel very capable and effective." "I seldom feel inadequate or incompetent."

Sense of relatedness
- Definition: the perception that one is both loving and caring for others while being loved by and cared for by others in a social system.
- Example questionnaire items: "I feel loved and cared about." "I seldom feel a lot of distance in my relationships."

Sense of autonomy
- Definition: the perception of having organized one's own experience and behavior, such that this self-organized activity maintains an integrated sense of self while serving to enhance the satisfaction of the other two needs.*
- Example questionnaire items: "I feel free to be who I am." "I seldom feel controlled and pressured to be certain ways."

*This second facet of the definition distinguishes sense of autonomy from independence, individualism, detachment, selfishness, or internal locus of control. Sense of autonomy involves internal regulatory schemas consistent with a sense of an integrated, joyful self rather than extrinsic regulatory schemas associated with experience of tension and ambivalence due to extrinsic pressures (Ryan & Deci, 2000a). SDT researchers have distinguished integrated from non-integrated choice making by the terms reflective autonomy for the former and reactive autonomy for the latter (Koestner & Losier, 1996). People experiencing a sense of reflective (versus reactive) autonomy will experience these feelings even in the presence of authority figures such as teachers, parents, popular peers, employers, police and corrections officers, etc. (Koestner & Losier, 1996).

This "basic" nature of Basic Psychological Need Satisfaction makes the Basic Psychological Need Satisfaction construct a powerful tool for evaluating human service programs that seek to enhance the functioning and well-being of human systems. Measuring outcomes without considering their effect on Basic Psychological Need Satisfaction carries the risk of counting as "successful" outcomes that may be either short lived or, in the long term, detrimental to the program participant's capacity for productively contributing to the system generating the outcomes. Basic Psychological Need Satisfaction – the combined sense of competence, relatedness, and autonomy – measured in relation to a given outcome indicates the degree to which the outcome benefits the individual producing it. Thus Basic Psychological Need Satisfaction can be used to assess the "value" of an outcome while simultaneously respecting the unique value systems of individual program participants.
Basic Psychological Need Satisfaction is also highly sensitive to the conditions of a given moment (Ryan, 1995). For example, an individual's experience of need satisfaction may be high in the presence of one parent and low with another, or high in relation to participating in a program and low at home. Thus, measuring need satisfaction can reveal differing effects of given environments on unique individuals. Because it is so sensitive to varying conditions and perspectives, the Basic Psychological Need Satisfaction construct can be used to measure some of the relationships in a foundational theory-based program model. For instance, considering pulse point #1, an outcome that enhances the well-being of a participant should improve participants' overall Basic Psychological Need Satisfaction, or at least not diminish it. Likewise, programming that diminishes Basic Psychological Need Satisfaction will, according to Self-Determination Theory (Edward et al., 2002), be less productive than programming that enhances it (pulse point #2). Similarly, a provider's activities accompanied by low Basic Psychological Need Satisfaction will be less effective than activities provided with enhanced need satisfaction (pulse point #6).
Another important SDT construct for evaluating programs from a whole-system perspective is that of Support for Autonomy. SDT research has established, across age groups and cultures, the relationship of autonomy support to Basic Psychological Need Satisfaction (Gagne, 2003; Gagne, Ryan, & Bargmann, 2003; Grolnick & Ryan, 1989; Grolnick, Ryan, & Deci, 1991; Wiest, Wong, & Kreil, 1998; Williams & Deci, 1996; Williams, Rodin, Ryan, Grolnick, & Deci, 1998; Wong, Wiest, & Cusick, 2002). Thus evidence of Support for Autonomy throughout the system, from provider and families to the participant (pulse points #3 and #4) and from program administration to provider (pulse point #7), provides important contextual information, particularly useful for quality improvement. Program outcomes and participant well-being will be enhanced as participants experience autonomy support. Support for Autonomy is measured with a six-item climate questionnaire (Deci & Ryan, 2008). Items can be tailored to any supporting environment and involve Likert scale responses to statements such as "my counselor encourages my questions," "my counselor conveys confidence in my ability to succeed," and "my counselor asks how I see things before suggesting new ways to do things." Measuring Support for Autonomy reveals information about how well providers are influencing the reflective autonomy and consequent internalized motivation of the people they serve, or, for instance, how families communicate to the participant their response to program results.
To date, Self-Determination Theory research has provided no specific tool for measuring the motivational effect of evaluation feedback (pulse points #5 and #8). However, either or both of the need satisfaction and autonomy support measures may be adaptable for validly and reliably measuring the effect. For example, to measure feedback effect, need satisfaction items could be phrased, "When I receive my performance review, I feel admired and cared about. I feel capable and effective. I (seldom) feel controlled and pressured to be certain ways," etc. Likewise, the autonomy support measure could be modified to read, "The evaluation process encourages me to ask questions," "...shows that the evaluators understand me and what I want to accomplish," "...acknowledges how I see things before suggesting new ways of doing them," etc. In this way, an SDT-based logic model builds into a program evaluation the long-standing concern of evaluators for evaluation anxiety (Donaldson et al., 2002) and a way to measure its presence and effect.
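To make the adaptation concrete, the re-stemming amounts to prefixing each item with a feedback context and scoring the responses as an ordinary Likert mean. The sketch below is not an instrument from the paper: the item wordings are modeled on the examples above, and the 7-point scale and the reverse-keying of the "controlled and pressured" item are assumptions.

```python
# Minimal sketch (hypothetical, not a validated instrument): adapt need
# satisfaction items so they are answered in relation to evaluation feedback,
# then score responses as a Likert mean with reverse-keyed items flipped.
ITEMS = [
    ("I feel admired and cared about.", False),
    ("I feel capable and effective.", False),
    ("I feel controlled and pressured to be certain ways.", True),  # reverse-keyed (assumed)
]

STEM = "When I receive my performance review, "

def restem(item_text):
    """Place an item in the evaluation-feedback context."""
    return STEM + item_text

def score(responses, items=ITEMS, scale_max=7):
    """Mean of 1..scale_max Likert responses; reverse-keyed items are flipped."""
    adjusted = [
        scale_max + 1 - r if reverse else r
        for r, (_, reverse) in zip(responses, items)
    ]
    return sum(adjusted) / len(adjusted)

restem(ITEMS[0][0])
# -> "When I receive my performance review, I feel admired and cared about."
```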
5. Using Self-Determination Theory as a foundational theory for evaluation design
As discussed above and shown in Fig. 1, a foundational theory-based human service program evaluation includes – in addition to the input, activity, output, and outcome elements found in the change model and the "nuts and bolts" of the action model (Chen, 2004) – measurement of any of the eight pulse points that help describe varying perspectives of interactions between the provider and target systems. As a foundational theory, Self-Determination Theory will prescribe how to measure these pulse points. In order to determine the productivity of the provider–target program system as it is affected by its surrounding systems, an SDT-based logic model utilizes the SDT-based constructs of Basic Psychological Need Satisfaction and Support for Autonomy (Program, Organization, and Family/Community Climate) to operationalize the eight inter- and intra-system relationships.
Similarly, an evaluator might utilize other systems approaches to analyze these relationships. It is possible that a different foundational theory may even reveal additional or different pulse points. Although exploring those other theories is beyond the scope of this paper, essential to its thesis is that whatever foundational theory or systems approach is used, it must both explain and provide a way to measure the functionality of the relationships that affect the productivity of the human service program. It must also explain and make measurable how varied understanding of these relationships, as seen from varied perspectives, affects system productivity.
The following examples from two diverse SDT-based program evaluations conducted by The Ohio State University Center for Family Research illustrate how evaluators can utilize a single foundational theory to produce widely varying evaluation designs. Each example is accompanied by a graphic model that illustrates its change model (shaded boxes), action model (white boxes), and the pulse point relationships that define the models (small squares). Each example is also explained with a table that delineates the program's underlying theories and the program model's pulse points. Each pulse point is further described with a description of the specific evaluation question(s) it addresses, the relationship and operative perspectives involved, the measurable indicators of those perspectives and relationships, and its related area of evaluation inquiry.
5.1. Example #1: longitudinal evaluation of a comprehensive out-of-school program
The broadest use of an SDT-based logic model to date has been for a longitudinal evaluation of the Scotts Miracle-Gro Cap Scholars out-of-school program. The evaluation addressed questions related to six of the eight pulse points, all but the two concerning evaluation feedback. The program's causative theory and change model (shown in the shaded boxes in Fig. 2) was that ongoing comprehensive academic and social support from the staff, and incentive from the funder in the form of a paid college education, would lead to academic success and career readiness. The evaluation therefore addressed outcomes related to four specific objectives (academic achievement, career focus, self-efficacy, social responsibility). The action model (shown with white boxes and arrows) is described below with the use of the six pulse point relationships that comprise it (Table 4).

Fig. 2. SDT-based program model with six pulse points for evaluating a comprehensive after-school program.
In this SDT-based model, pulse point #1, the relationship of the participant to the outcomes, addressed the question: did the four outcomes contribute to the students' overall well-being? The answer was operationalized by nesting program outcomes within students' overall Basic Psychological Need Satisfaction. In so doing, for each measured outcome (academic achievement, self-efficacy, etc.), the data could be organized into a four-square pattern of outcome results (Table 5): on the y axis is the expected outcome achieved or not achieved and on the x axis, positive or negative effect on overall Basic Psychological Need Satisfaction (as measured in relation to authority).
Following SDT, outcomes achieved with maintained or improved need satisfaction were considered to be true successes; cases of lack of outcome with diminished Basic Psychological Need Satisfaction were true failures. The opposite diagonal of the four-square yielded deeper insight than afforded by more traditional evaluations. Outcomes achieved with diminished Basic Psychological Need Satisfaction were considered to be either short lived or predictive of additional unintended and probably negative consequences. Improved Basic Psychological Need Satisfaction in the presence of outcomes not achieved indicated that measured outcomes were failing to capture the full range of benefits achieved by the program, especially if need satisfaction in relation to the program was also high. Within this category of results might be, for example, the student with whom program staff had been working to establish enough academic footing, confidence, and interest to engage academically the following year. To label this student a failure because of low GPA or homework motivation after the first year would have robbed both the student of necessary preparation time and the teacher of being able to individually address student needs. On the other hand, improved overall need satisfaction would reflect both concurrent positive behaviors in a realm outside of school and predict future positive behaviors in school.
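The four-square pattern amounts to a simple two-way classification, which can be sketched as follows. The four category labels come from Table 5; how "achieved" and "high or improved" would be operationalized for a given outcome is left as an assumption:

```python
# Sketch of the Table 5 four-square classification: each measured outcome is
# cross-tabulated with its effect on overall Basic Psychological Need
# Satisfaction (BPNS). Labels are from Table 5; the thresholds behind the two
# boolean inputs are hypothetical.
def classify(outcome_achieved, bpns_high_or_improved):
    if outcome_achieved and bpns_high_or_improved:
        return "success"               # true success
    if outcome_achieved:
        return "questionable success"  # possibly short lived or costly
    if bpns_high_or_improved:
        return "possible success"      # benefits the outcome measure missed
    return "no success"                # true failure

# A student who met the GPA target but with diminished need satisfaction:
classify(True, False)  # -> "questionable success"
```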
The second pulse point, the relationship of the participant to program activities, was operationalized with the shortest term outcomes of the change (activities to outcomes) model. For this evaluation, the program outputs and short term outcomes included attendance and engagement. But in addition, those short term outcomes were augmented with the understanding that the Basic Psychological Need Satisfaction of a participant in relation to the program reflected how much the program could be contributing to that person's overall well-being. Consider, for instance, a highly successful, high need satisfaction performer in the out-of-school program who reported low sense of relatedness, competence, and autonomy in relation to the program. Based on SDT and the research supporting it, the chance that the program had contributed to this student's success was very low.
Both of these first two pulse points tempered outcome information with additional information about the quality of the outcome and its association with the program. In this way, outcomes irrelevant to the well-being of program participants were documented, and outcomes occurring outside of the program's influence could be explored.

Table 4
Out-of-school program evaluation: questions addressed by causative theory, normative theory, and foundational theory.(a)

Causative theory (e.g., ongoing comprehensive academic and social support for students and families will lead to program engagement that will result in academic success):
- Pulse point #2, participant to program activities. Evaluation question: did the program resources and activities demonstrate fidelity to the action and change models? Perspective: funder-approved activities and outcomes (often developed by and jointly agreed to by other stakeholders). Measurement: (outputs) number and type of activities; attendance; levels of engagement.
- Pulse point #2, participant to program activities. Evaluation question: did the program resources and activities contribute to academic success? Measurement: outcome achievement (GPA, homework motivation, critical thinking, career decision making, social responsibility).

Normative theory (includes innovative programming from staff and mentors, a supportive administrative environment, a supportive program environment, parent commitment, regular attendance, a counselor on staff who works with both parents and teachers, use of a discovery-based science museum as a supportive and inspirational program home, etc.):
- Pulse point #1, participant to outcomes. Evaluation question: did program resources and activities lead to outcomes that enhanced participant well-being? Perspective: participant. Measurement: outcomes in relation to overall need satisfaction.
- Pulse point #2, participant to program activities. Evaluation question: did program resources and activities lead to participants' internalized motivation to achieve program outcomes? Perspective: participant. Measurement: higher score on the BPNS in Relation to Program Scale (BPNS-Program) than concurrent overall BPNS.
- Pulse point #3, participant to provider.

Foundational theory (Self-Determination Theory: internalized motivation throughout the programming system leads to successful production of outcomes that are beneficial to the students' overall well-being, as measured by their Basic Psychological Need Satisfaction in relation to authority):
- Pulse point #4, participant to family or community. Evaluation question: to what degree did the participants' family/community environment further or hamper program efforts? Perspective: participant. Measurement: BPNS in relation to family; BPNS in relation to community; Family Support for Autonomy.
- Pulse point #6, provider to activities and outputs. Evaluation question: did program staff support participants' internalized motivation to achieve outcomes? Perspectives: participant, provider. Measurement: participant perception of staff Support for Autonomy; participant BPNS in relation to program; provider BPNS in relation to program.
- Pulse point #7, provider to administration. Evaluation question: how well did the administration support staff's optimal performance? Perspective: provider. Measurement: staff perception of autonomy support.

(a) Note that these pulse points are used in addition to more typically monitored process and outcome measures.

Table 5
Four outcome groups.

                        Overall Basic Psychological Need Satisfaction
Outcome achievement     High or improved        Low or diminished
Achievement             Success                 Questionable success
No achievement          Possible success        No success
The third pulse point, assessment of the relationship between the participant and the provider, was operationalized as the participants' perceptions of program support for their autonomy. Administrators recognized that students experiencing less autonomy support from staff would, in the long run, receive less benefit from the program. Also, students came to recognize that their responses to the SDT-based questionnaires gave them an important voice, letting staff know how they were feeling about program participation.
In addition to recognizing program Support for Autonomy as an important element, SDT guided the evaluator to consider family and community climate when assessing the program's effect on the family and community conditions (pulse point #4) that would reinforce the academic outcomes the program was working to create. Based on information that revealed students' perception that parental support for their autonomy was weak, program providers added family strengthening programming with the intention of bolstering overall Basic Psychological Need Satisfaction. Thus, measuring students' perspective on family climate helped determine how well families were supporting the aims of the program.

Fig. 3. SDT-based program model with four pulse points for evaluating a data collection …
Provider context (pulse point #7) was operationalized as the
program staff’s perception of management’s support for their
autonomy. This measurement was based on the SDT premise
that
staff who felt supported in their work environment would in
turn
experience improved Basic Psychological Need Satisfaction in
their
work (measured with pulse point #6) and in turn would create
higher quality opportunities for program participants both
within
and outside of the prescribed program activities.
5.2. Example #2: a statewide effort to utilize data to enhance child mental health service coordination
The second exemplar involves a formative evaluation project designed to help county-level service coordination agencies build capacity for collecting and using evaluation data to enhance outcomes for families of children with mental health needs. Having been based on a child outcomes-based program theory, the service coordination effort had yet to realize its focus on families. Part of the intention of this project was to enhance child outcomes through inclusion of family outcomes and greater family engagement. As with the previous exemplar, the change model is represented in the shaded boxes of the program's SDT-based program model (Fig. 3) and the action model in the white boxes, defined by the pulse points explained below.
Whereas the first exemplar illustrated use of foundational theory to explain system effects primarily on participant outcomes, this second exemplar demonstrates the use of foundational theory to explain and monitor the effects of the evaluation itself. As illustrated in Fig. 3, for this evaluation, pulse points #5 and #8 (the relationships of provider and participant to evaluation feedback) were of paramount importance (Table 6).
For both families and providers, how providers and evaluators managed evaluation anxiety would directly impact the success of the program. While use of data to make service coordination decisions was the program objective, using that data with the end
Fig. 3. SDT-based program model with four pulse points for evaluating a data collection and utilization program to enhance service coordination for youth and families.
Table 6
Service coordination data collection and utilization: evaluation questions addressed by causative theory, normative theory, and foundational theory. (Columns: theoretical support; pulse point relationship; evaluation question addressed; perspective; indicators, i.e., quality of relationships between distinct system parts; area of evaluation inquiry.)

Change Theory: Use of data to track family needs, characteristics, services provided, changes in needs, and goals attained will lead to effective service coordination and improved child outcomes.
- Pulse point #2 (participant to program activities). Question: Did the county coordinators input data and utilize the data reporting system in a way that was consistent with generating accurate information? Perspective: funder and evaluator. Indicators: (outputs) number of cases entered consistent with other county records; number of completed planning and data reports. Areas of inquiry: formative evaluation; quality improvement.
- Pulse point #2 (participant to program activities). Question: Did the data collection and reporting system contribute to improved outcomes? Perspective: county service coordinators. Indicators: outcome achievement (number of completed referrals for positive screens). Area of inquiry: accountability.

Normative Theory: Evaluators will supply program data to program administrators in a way that enhances providers' interest in adopting the innovation. In turn, they supply services to families in ways that enhance families' sense of need satisfaction in relation to the services they receive.
- Pulse point #5 (provider system to outcome evaluation feedback). Question: How well equipped were family caregivers to respond to outcome feedback with appropriate access to services? Perspective: family caregivers. Indicators: improved satisfaction of family caregiver wants and needs. Areas of inquiry: quality improvement; valuing (by whose values?).
- Pulse point #6 (provider to activities and outputs). Question: Did county service coordinators and other personnel involved with data entry experience the data entry system as enhancing their ability to do their work? Perspective: county service coordinators and data entry personnel. Indicators: perception of innovation adoption (observability, complexity, compatibility) in relation to the data entry system. Area of inquiry: quality improvement.
- Pulse point #6 (provider to activities and outputs). Question: Do county service coordinators and other county-level stakeholders experience the data feedback system as enhancing their ability to do their work? Perspective: county service coordinators and other county-level stakeholders. Indicators: perception of innovation adoption (observability, complexity, compatibility) in relation to the data collection and report planning and feedback process. Area of inquiry: effect of program context.

Foundational Theory (Self-Determination Theory): Internalized motivation throughout the programming system leads to successful production of outcomes that are valuable throughout both target and provider sub-systems.
- Pulse point #7 (provider to administration). Question: How well did the data planning and reporting process support program staff's optimal performance? Perspective: county service coordinators. Indicators: perception of autonomy support from data and planning process. Areas of inquiry: effect of organizational context; quality improvement.
- Pulse point #8 (administration to evaluation feedback). Question: Do county service coordinators and other providers experience the relationship with the evaluator as enhancing their ability to do their work? Perspective: county service coordinators. Indicators: perception of autonomy support from the evaluator. Area of inquiry: evaluation anxiety (effect of evaluation feedback).
- Pulse point #8 (administration to evaluation feedback). Question: Do county service coordinators utilize the data in a way that enhances family and child outcomes? Perspective: evaluator. Indicators: qualitative analysis of county Continuous Quality Improvement planning reports in relation to data reports.
D.L. Wasserman / Evaluation and Program Planning 33 (2010) 67–80
result of dampening family involvement would be counterproductive to engaging them. To monitor family response to both formal and informal evaluation of their child's progress and behavior (pulse point #5), families completed the Family Caregiver Wants and Needs Scale (Gavazzi et al., 2008), a measure of family caregivers' perceptions of how well their needs were being heard and met by both formal and informal service resources.
Even more important to this data-enhanced service coordination project was monitoring provider response to the evaluation reports and limiting the anxiety the reports produced (pulse point #8).
According to SDT, internalized motivation to utilize the evaluation data would be indirectly related to the anxiety generated by that data. Thus, with an eye toward ‘‘teaching to the test,’’ the evaluation involved providers (county service coordinators) in the semi-annual process of completing continuous quality improvement ‘‘Do-Study-Reflect-Plan’’ reports (Langley, Nolan, Norman, Provost, & Nolan, 1996). These reports, constructed initially in one-on-one conversations with the evaluator, involved four steps: (1) reviewing the most recent data report and/or identifying evaluation questions and their purpose; (2) selecting instruments available in the on-line system to collect information to answer those questions; (3) identifying new operating, reporting, or data-use strategies to be implemented; and (4) specifying expected results in the next report along with what they will do about them. The resulting data report would be as simple or complicated as they needed or wanted and would reflect data gathered with only the instruments they chose.
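To make the four-step cycle concrete, the sketch below models one Do-Study-Reflect-Plan report as a simple record; the class name, field names, and sample entries are hypothetical illustrations, not artifacts of the actual on-line system.

```python
# Hedged sketch of the four-step "Do-Study-Reflect-Plan" report cycle.
# All names and sample entries are hypothetical, not from the actual system.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DSRPReport:
    questions: List[str] = field(default_factory=list)         # step 1: evaluation questions and their purpose
    instruments: List[str] = field(default_factory=list)       # step 2: instruments selected to answer them
    strategies: List[str] = field(default_factory=list)        # step 3: new operating/reporting/data-use strategies
    expected_results: List[str] = field(default_factory=list)  # step 4: expected results and planned responses

def build_report() -> DSRPReport:
    report = DSRPReport()
    report.questions.append("Are positive screens followed by completed referrals?")
    report.instruments.append("referral-tracking report")  # hypothetical instrument name
    report.strategies.append("review referral follow-up at monthly staffing")
    report.expected_results.append("higher referral completion; if not, revisit intake workflow")
    return report

report = build_report()
# The report is only as complicated as the coordinator wants: here, one
# question answered by one instrument.
print(len(report.questions), len(report.instruments))
```

The one-record structure mirrors the paper's point that coordinators, not the evaluator, decide how many questions and instruments each cycle carries.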
To evaluate the relationship of provider to data collection activities and outputs (pulse point #6), County Service Coordinators and other data entry personnel completed two versions of the Perception of Innovation Adoption Questionnaire (Pankratz, Hallfors, & Cho, 2002). This survey assesses perception of an innovation's observable benefit, complexity, and adaptability. Selection of the questionnaire was based on the assumption that internalized motivation would be highest if the program was easily adaptable, had observable benefit, and had minimal complexity. The first version related to the on-line data entry system using, for instance, the wording, ‘‘The online data collection system fits well with the way I work’’; the second version asked about the data planning, reporting, and support system, worded, ‘‘Using the Do-Study-Reflect-Plan process will increase the quality of how we serve children and families in our county.’’
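As an illustration of how such questionnaire responses might be summarized, the sketch below computes subscale means for the three perceived attributes; the item keys, the 1-5 response scale, and the reverse-scoring of complexity are assumptions made for this example, not the published scoring of the Pankratz et al. instrument.

```python
# Hypothetical scoring sketch for an innovation-adoption style questionnaire.
# Item keys, the 1-5 scale, and reverse-scoring of complexity are assumptions.
from statistics import mean

SUBSCALES = {
    "observability": ["obs1", "obs2"],
    "complexity": ["cmx1", "cmx2"],     # high complexity should lower adoptability
    "compatibility": ["cmp1", "cmp2"],  # e.g., "fits well with the way I work"
}

def subscale_scores(responses, reverse=("complexity",), scale_max=5):
    """Return a mean score per subscale, reverse-scoring items where higher = worse."""
    scores = {}
    for name, items in SUBSCALES.items():
        values = [responses[item] for item in items]
        if name in reverse:
            values = [scale_max + 1 - v for v in values]  # so higher always means more adoptable
        scores[name] = mean(values)
    return scores

example = {"obs1": 4, "obs2": 5, "cmx1": 2, "cmx2": 1, "cmp1": 4, "cmp2": 4}
print(subscale_scores(example))  # complexity items (2, 1) reverse-score to (4, 5)
```

Reverse-scoring complexity reflects the paper's assumption that minimal complexity, like observable benefit, predicts higher internalized motivation to adopt.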
After each reporting cycle, to gauge how well the system was supporting provider autonomy (i.e., the relationship of the provider to the program administration; pulse point #7), county coordinators completed autonomy support questionnaires related to the Do-Study-Reflect-Plan process (e.g., ‘‘The Do-Study-Reflect-Plan process encourages me to ask questions that will help to enhance service coordination in my county.’’). In addition, the evaluator conducted qualitative analysis of the planning reports as related to the data reports and feedback (pulse point #8) to determine system alterations that would further county use of the system and ultimately family engagement.
In this exemplar, use of a Foundational Theory coordinated into a single model the multi-layered units of analysis: service coordinators, families, and children. Whereas the causative model, with its focus on child outcomes only, had discouraged meaningful data collection and use, the systems orientation and foundational theory-based model (in this case, SDT) encouraged data collection while reengaging service coordinators' focus on families in addition to improving child outcomes.
6. Lessons learned: contributions of foundational theory-based models
This paper has introduced the notion that by adopting a systems orientation and foundational theory, evaluators can (1) systematically define eight operative ‘‘pulse points’’ of a program's action and change models using relationships and perspectives within and between provider and target systems; (2) identify social science theory as a foundational theory that explains how to determine the ‘‘health’’ of these pulse points relative to how they contribute to the change model and the outcomes; and, with both systems thinking and foundational theory to support it, (3) significantly enhance the design and conclusions of their human service program evaluations. The paper has also introduced Self-Determination Theory as an example of a useful and informative foundational theory.
Two exemplars have shown that, depending on the topic of inquiry and the related questions, each evaluation design will involve different combinations of pulse points. Few evaluation designs will incorporate all eight. To use the framework, evaluators will first identify the relevant provider and target system distinctions, and then the operative relationships and perspectives within and between them. Next, they use the pulse points to decide which evaluation questions they want to answer, and then construct the evaluation logic accordingly.
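The design sequence just described can be sketched as a filtering step over a catalog of pulse points; the subset shown and the question wordings are condensed illustrations drawn from Table 6, not a complete design.

```python
# Minimal sketch of the framework's design sequence: keep only the pulse
# points that the evaluation questions actually require. Wordings condensed.
PULSE_POINTS = {
    5: "provider system to outcome evaluation feedback",
    6: "provider to activities and outputs",
    7: "provider to administration",
    8: "administration to evaluation feedback",
}

# Map each evaluation question to the pulse point it interrogates.
evaluation_questions = {
    "Does the data entry system enhance coordinators' work?": 6,
    "Does the planning process support staff's optimal performance?": 7,
    "Does the evaluator relationship enhance providers' work?": 8,
}

# Few designs use all eight pulse points; select only those that are mapped,
# then build the evaluation logic from the selected relationships.
selected = sorted(set(evaluation_questions.values()))
design = [(pp, PULSE_POINTS[pp]) for pp in selected]
print(design)
```

The selection step is the whole point: the questions drive which relationships get measured, never the reverse.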
None of the questions related to the system pulse points in any of these examples are new to evaluation practice. Evaluators have long been concerned with the evaluation issues of valuing by whose values, evaluation anxiety, and the effect of context on both providers and participants. However, in practice they have primarily designed causative theory-based program models with no formal theory to explain, or systematic way to define, the action model and its effect on the change model. Thus, attention to these contextual and feedback issues has often been considered incidental to determining if human service programs achieved their projected outcomes. Yet both program providers and evaluators have known that these additional ‘‘peripheral’’ context and feedback issues have profound impact on the quality of not only the programming, but the outcomes as well. A systems-oriented program model with foundational theory to support it moves these heretofore sidelined areas of investigation into a more central and purposeful focus.
Although the two examples of program evaluations designed using this framework have shown the breadth and flexibility of a single foundational theory, there is much work to be done to explore uses of a foundational theory across the vast array of human service programs, each with unique designs, stakeholder concerns, and evaluation questions to be addressed.
As for the use of Self-Determination Theory as a foundational theory, although it is based on empirical findings from across multiple disciplines, the SDT-based evaluation framework remains theoretical until it is empirically tested in the context of evaluation practice. That process will be multi-faceted and include many steps that test both instruments and methodology. Although most of the measurement instruments have been tested for reliability and validity in the context of SDT research (which is continually expanding in the literature by researchers worldwide), their utility for program evaluation per se has not been confirmed. The use of the BPNS-authority scale needs to be validated as an overall BPNS measure and one that, in a program evaluation setting, discriminates between reflective and reactive autonomy. Also, as noted in section III, no instrument has yet been designed to measure the effect of evaluation feedback on Basic Psychological Need Satisfaction. In addition to the instruments, the relationships between participant and provider Basic Psychological Need Satisfaction and program outcomes need to be confirmed in the context of a full range of program evaluations. Testing also needs to confirm that the theoretical assumptions contained within the framework transfer to the wide range of human service programs by and for a wide range of cultures. Despite these potential limitations, evaluators can utilize the concepts and measurement tools to answer important evaluation questions.
The advantage of Self-Determination Theory as a foundational theory is that it provides a way to interpret disparate behaviors, attitudes, and contextual supports in a way that can be standardized to individual human well-being. Other foundational theories will no doubt be found to have equally useful and possibly different advantages. Future theorists and practitioners may be able to explore other broad-based motivational or productivity theories, such as exchange theory, structural functionalism, network theory, ecological theory, or any of the many systems theories, as equally useful foundational theories.
Whatever theory is used to support it, this systems-oriented approach to designing human service program evaluations is offered as one more tool for stimulating continual dialogue around difficult evaluation questions: How do evaluators systematically account for the contextual factors that affect how a human service program's merit, value, or worth is assessed? In what ways can evaluators discourage negative impacts of the evaluation process on either program providers or participants? What strategies exist for maximizing the program-improvement benefits of outcome evaluation? By understanding human service program outcomes as resulting from various relationships within and between systems, evaluators have the opportunity to shed new light on age-old questions.
Acknowledgements
Earlier versions of this paper were presented at the joint Canadian Evaluation Society/American Evaluation Association conference in Toronto, Ontario, in October 2005 and at the American Evaluation Association conference in Portland, Oregon, in November 2006.
The author first acknowledges the Ohio State University Center for Family Research, which has adopted Foundational Theory-Driven Evaluations as the basis for its evaluative work and has provided each of the exemplars used in this paper. Acknowledgements also go to the Nationwide Children's Research Institute Center for Innovation in Pediatric Practice, and to Kathi Pajer and the Nationwide Children's Research Institute writer's group, and to Stephen M. Gavazzi, William Meezan, Robin Miller, Jonny Morell, Marilyn McKinley, Amy Hoch, and David Yorka for valuable editorial feedback.
References
Altschuld, J. W., & Kumar, D. (1995). Program evaluation in science education: The model perspective. New Directions for Evaluation, 65, 5.
Baard, P. P., Deci, E. L., & Ryan, R. M. (2004). Intrinsic need satisfaction: A motivational basis of performance and well-being in two work settings. Journal of Applied Social Psychology, 34(10), 2045.
Baker, S. R. (2004). Intrinsic, extrinsic, and amotivational orientations: Their role in university adjustment, stress, well-being, and subsequent academic performance. Current Psychology: Developmental, Learning, Personality, Social, 23(3), 189.
Cabrera, D., Colosi, L., & Lobdell, C. (2008). Systems thinking. Evaluation and Program Planning, 3, 317–321.
Chen, H. T. (1990). Theory-driven evaluations. Newbury Park, CA: Sage.
Chen, H. T. (2004). Practical program evaluation: Assessing and improving planning, implementation, and effectiveness. Thousand Oaks, CA: Sage.
Cronbach, L. J. (1982). Designing evaluations of educational and social programs (with the assistance of Karen Shapiro). San Francisco: Jossey-Bass.
Davies, R. (2004). Scale, complexity and the representation of theories of change. Evaluation, 10(1), 101–121.
Deci, E. L., Connell, J. P., & Ryan, R. M. (1989). Self-determination in a work organization. Journal of Applied Psychology, 74(4), 580.
Deci, E. L., Koestner, R., & Ryan, R. M. (1999). The undermining effect is a reality after all—Extrinsic rewards, task interest, and self-determination: Reply to Eisenberger, Pierce, and Cameron (1999) and Lepper, Henderlong, and Gingras (1999). Psychological Bulletin, 125(6), 692.
Deci, E. L., & Ryan, R. M. (2000). The ‘‘what’’ and ‘‘why’’ of goal pursuits: Human needs and the self-determination of behavior. Psychological Inquiry, 11(4), 227.
Deci, E. L., & Ryan, R. M. (2002). Overview of self-determination theory: An organismic dialectical perspective. In E. L. Deci & R. M. Ryan (Eds.), Handbook of self-determination research (pp. 3–31). Rochester, NY: University of Rochester Press.
Deci, E. L., & Ryan, R. M. (2008). Self-Determination Theory: An approach to human motivation and personality: Questionnaires: Perceived autonomy support: The climate questionnaires. <http://www.psych.rochester.edu/SDT/measures/auton.html>. Retrieved 4.01.08.
Deci, E. L., Ryan, R. M., Gagne, M., Leone, D. R., Usunov, J., & Kornazheva, B. P. (2001). Need satisfaction, motivation, and well-being in the work organizations of a former Eastern bloc country: A cross-cultural study of self-determination. Personality & Social Psychology Bulletin, 27(8), 930.
Deci, E. L., Ryan, R. M., & Koestner, R. (2001). The pervasive negative effects of rewards on intrinsic motivation: Response to Cameron (2001). Review of Educational Research, 71(1), 43.
Deci, E. L., & Vansteenkiste, M. (2004). Self-determination theory and basic need satisfaction: Understanding human development in positive psychology. Ricerche di Psicologia, 27(1), 23.
Donaldson, S. I. (2007). Program theory-driven evaluation science. New York: Lawrence Erlbaum Associates.
Donaldson, S. I., Gooler, L. E., & Scriven, M. (2002). Strategies for managing evaluation anxiety: Toward a psychology of program evaluation. American Journal of Evaluation, 23(3), 261.
Gagne, M. (2003). The role of autonomy support and autonomy orientation in prosocial behavior engagement. Motivation & Emotion, 27(3), 199.
Gagne, M., Ryan, R. M., & Bargmann, K. (2003). Autonomy support and need satisfaction in the motivation and well-being of gymnasts. Journal of Applied Sport Psychology, 15(4), 372.
Gasper, D. (2000). Evaluating the ‘logical framework approach’ towards learning-oriented development evaluation. Public Administration and Development, 20(1), 17–28.
Gavazzi, S. M., Scheer, S. D., Kwon, I. I., Lammers, A., Fristad, M. A., & Uppal, R. (2008). Measuring caregiver wants and needs in families of youth with behavioral health concerns. Unpublished manuscript.
Grolnick, W. S., & Ryan, R. M. (1989). Parent styles associated with children's self-regulation and competence in school. Journal of Educational Psychology, 81(2), 143.
Grolnick, W. S., Ryan, R. M., & Deci, E. L. (1991). Inner resources for school achievement: Motivational mediators of children's perceptions of their parents. Journal of Educational Psychology, 83(4), 508.
Grolnick, W. S., & Slowiaczek, M. L. (1994). Parents' involvement in children's schooling: A multidimensional conceptualization and motivational model. Child Development, 65(1), 237.
Hatry, H., Van Houten, T., Plantz, M. C., & Greenway, M. T. (1996). Measuring program outcomes: A practical approach. Alexandria, VA: United Way of America.
Henggeler, S. W. (1999). Multisystemic therapy: An overview of clinical procedures, outcomes, and policy implications. Child Psychology and Psychiatry Review, 4(1), 2–10.
Innovation Network Inc. (2009). Logic model workbook. <http://www.innonet.org/client_docs/File/logic_model_workbook.pdf>. Retrieved 30.01.09.
Kaplan, S. A., & Garrett, K. E. (2005). The use of logic models by community-based initiatives. Evaluation and Program Planning, 28(2), 167–172.
Kasser, V. G., & Ryan, R. M. (1999). The relation of psychological needs for autonomy and relatedness to vitality, well-being, and mortality in a nursing home. Journal of Applied Social Psychology, 29(5), 935.
Knowlton, L. W., & Phillip, C. C. (2009). The logic model guidebook: Better strategies for great results. Thousand Oaks, CA: SAGE Publications Ltd.
Koestner, R., & Losier, G. F. (1996). Distinguishing reactive versus reflective autonomy. Journal of Personality, 64(2), 465.
Langley, G. J., Nolan, K. M., Norman, C. L., Provost, L. P., & Nolan, T. W. (1996). The improvement guide: A practical approach to enhancing organizational performance. New York, NY: Jossey-Bass.
Mark, M. M., & Henry, G. T. (2004). The mechanisms and outcomes of evaluation influence. Evaluation: The International Journal of Theory, Research and Practice, 10(1), 35.
McLaughlin, J. A., & Jordan, G. B. (1999). Logic models: A tool for telling your program's performance story. Evaluation and Program Planning, 22(1), 65–72.
Pankratz, M., Hallfors, D., & Cho, H. (2002). Measuring perceptions of innovation adoption: The diffusion of a federal drug prevention policy. Health Education Research, 17(3), 315.
Reis, H. T., Sheldon, K. M., Gable, S. L., Roscoe, J., & Ryan, R. M. (2000). Daily well-being: The role of autonomy, competence, and relatedness. Personality & Social Psychology Bulletin, 26(4), 419.
Rogers, P. J. (2000). Causal models in program theory evaluation. New Directions for Evaluation, 2000(87), 47–55.
Ryan, R. M. (1995). Psychological needs and the facilitation of integrative processes. Journal of Personality, 63(3), 397–427.
Ryan, R. M., Chirkov, V. I., Little, T. D., Sheldon, K. M., Timoshina, E., & Deci, E. L. (1999). The American dream in Russia: Extrinsic aspirations and well-being in two cultures. Personality & Social Psychology Bulletin, 25(12), 1509.
Ryan, R. M., & Deci, E. L. (2000a). The darker and brighter sides of human existence: Basic psychological needs as a unifying concept. Psychological Inquiry, 11(4), 319.
Ryan, R. M., & Deci, E. L. (2000b). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist, 55(1), 68.
Ryan, R. M., Deci, E. L., & Grolnick, W. S. (1995). Autonomy, relatedness, and the self: Their relation to development and psychopathology. In Developmental psychopathology, Vol. 1: Theory and methods (p. 618). Oxford, England: John Wiley & Sons.
Ryan, R. M., Kuhl, J., & Deci, E. L. (1997). Nature and autonomy: An organizational view of social and neurobiological aspects of self-regulation in behavior and development. Development & Psychopathology, 9(4), 701.
Ryan, R. M., La Guardia, J. G., Solky-Butzel, J., Chirkov, V., & Kim, Y. (2005). On the interpersonal regulation of emotions: Emotional reliance across gender, relationships, and cultures. Personal Relationships, 12(1), 145.
Savaya, R., & Waysman, M. (2005). The logic model: A tool for incorporating theory in development and evaluation of programs (0364-3107).
Schalock, R. L., & Bonham, G. S. (2003). Measuring outcomes and managing for results. Evaluation and Program Planning, 26(3), 229–235.
Schmuck, P., Kasser, T., & Ryan, R. M. (2000). Intrinsic and extrinsic goals: Their structure and relationship to well-being in German and U.S. college students. Social Indicators Research, 50(2), 225.
Scriven, M. (1991). Evaluation thesaurus (4th ed.). Newbury Park, CA: Sage Publications Inc.
Scriven, M. (1999). The fine line between evaluation and explanation. Research on Social Work Practice, 9(4), 521.
Sheldon, K. M., & Houser-Marko, L. (2001). Self-concordance, goal attainment, and the pursuit of happiness: Can there be an upward spiral? Journal of Personality and Social Psychology, 80(1), 152.
Sheldon, K. M., Ryan, R. M., Deci, E. L., & Kasser, T. (2004). The independent effects of goal contents and motives on well-being: It's both what you pursue and why you pursue it. Personality & Social Psychology Bulletin, 30(4), 475.
Stame, N. (2004). Theory-based evaluation and types of complexity. Evaluation, 10(1), 58–76.
Vansteenkiste, M., Simons, J., Lens, W., Sheldon, K. M., & Deci, E. L. (2004). Motivating learning, performance, and persistence: The synergistic effects of intrinsic goal contents and autonomy-supportive contexts. Journal of Personality & Social Psychology, 87(2), 246.
Vansteenkiste, M., Zhou, M., Lens, W., & Soenens, B. (2005). Experiences of autonomy and control among Chinese learners: Vitalizing or immobilizing? Journal of Educational Psychology, 97(3), 468.
Veronneau, M. H., Koestner, R. F., & Abela, J. R. Z. (2005). Intrinsic need satisfaction and well-being in children and adolescents: An application of the self-determination theory. Journal of Social & Clinical Psychology, 24(2), 280.
W. K. Kellogg Foundation. (2004). Using logic models to bring together planning, evaluation, and action: Logic model development guide. <http://www.wkkf.org/Pubs/Tools/Evaluation/Pub3669.pdf>.
Wandersman, A., Imm, P., Chinman, M., & Kaftarian, S. (2000). Getting to outcomes: A results-based approach to accountability. Evaluation and Program Planning, 23(3), 389.
Wasserman, D. L. (2008). A response to paper ‘‘Systems Thinking’’ by D. Cabrera et al.: Next steps, a human service program system exemplar. Evaluation and Program Planning, 31(3), 327–329.
Wiest, D. J., Wong, E. H., Cervantes, J. M., Craik, L., & Kreil, D. A. (2001). Intrinsic motivation among regular, special, and alternative education high school students. Adolescence, 36(141), 111.
Wiest, D. J., Wong, E. H., & Kreil, D. A. (1998). Predictors of global self-worth and academic performance among regular education, learning disabled, and continuation high school students. Adolescence, 33(131), 601.
Williams, B., & Imam, I. (Eds.). (2007). Systems concepts in evaluation: An expert anthology. Point Reyes, CA: EdgePress/American Evaluation Association.
Williams, G. C., & Deci, E. L. (1996). Internalization of biopsychosocial values by medical students: A test of self-determination theory. Journal of Personality and Social Psychology, 70(4), 767.
Williams, G. C., McGregor, H. A., Zeldman, A., Freedman, Z. R., & Deci, E. L. (2004). Testing a self-determination theory process model for promoting glycemic control through diabetes self-management. Health Psychology, 23(1), 58.
Williams, G. C., Rodin, G. C., Ryan, R. M., Grolnick, W. S., & Deci, E. L. (1998). Autonomous regulation and long-term medication adherence in adult outpatients. Health Psychology, 17(3), 269.
Wong, E. H., Wiest, D. J., & Cusick, L. B. (2002). Perceptions of autonomy support, parent attachment, competence and self-worth as predictors of motivational orientation and academic achievement: An examination of sixth- and ninth-grade regular education students. Adolescence, 37(146), 255.
Deborah Wasserman is the evaluation and research specialist at the Center for Family Research at The Ohio State University and the President of PER Solutions: Program Evaluation and Research.