1. Chapter 7:
Measuring and Monitoring Program Outcomes:
The “bottom line”
Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A Systematic Approach (7th ed.). Thousand Oaks, CA: Sage Publications.
2. Program Outcomes:
● The state of the target population or the social condition
that a program is expected to have changed
○ Observed characteristics of target population or
social condition - NOT of the program
○ Definition of outcome makes no direct reference to
program actions
○ Does not necessarily mean that program targets
have changed or program has caused change
Rossi, Lipsey, & Freeman, 2004, pp. 204-205
3. Program Outcomes:
Outcome Level, Outcome Change, and Net Effect
Outcome Level: the status of an
outcome at some point in time
Outcome levels alone cannot be
interpreted with any confidence as
indicators of a program’s success or
failure
Outcome Change: the difference
between outcome levels at
different points in time
Program Effect: the portion of an
outcome change that can be
uniquely attributed to the program,
as opposed to the influence of
other factors
The program effect is the difference between
the outcome level attained with participation
in the program and the level the same
individuals would have attained had they not
participated; because that counterfactual cannot
be observed, the program effect must be inferred.
Rossi, et al., 2004, pp. 206-208
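To make the distinctions on this slide concrete, here is a minimal numeric sketch in Python. All values are invented for illustration; in practice the counterfactual level cannot be observed and must be inferred from a comparison group or other design.

# Hypothetical outcome scores for a group of program participants (invented numbers).
outcome_level_before = 40.0   # outcome level at intake
outcome_level_after = 55.0    # outcome level at follow-up

# Outcome change: difference between outcome levels at two points in time.
outcome_change = outcome_level_after - outcome_level_before   # 15.0

# Counterfactual: the level the same individuals would have reached without the
# program. It cannot be observed directly and must be inferred (e.g., from a
# control or comparison group); 48.0 is an invented value for illustration.
counterfactual_level = 48.0

# Program effect: the portion of the change uniquely attributable to the program.
program_effect = outcome_level_after - counterfactual_level   # 7.0

# The remaining change (48.0 - 40.0 = 8.0) reflects other influences such as
# maturation or outside events, which is why outcome levels and simple change
# scores alone cannot be read as program success or failure.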
4. The FIRST STEP in developing measures for program outcomes is identifying the relevant outcomes.
An evaluator must consider:
● the perspectives of the stakeholders
● program impact theory
● prior research
● unintended outcomes
5. Consider Stakeholder Perspectives
Stakeholders have their own understanding of what the program should accomplish and what outcomes they expect it to affect.
● Direct sources: the program's stated objectives, goals, and mission; funding proposals; and grants and contracts for services from outside sponsors
● The outcome description must indicate the pertinent characteristic, behavior, or condition the program is expected to change
Rossi, et al., 2004, p. 209
6. Consider Program Impact
Theory
(Chapter 5)
A program impact theory is helpful for
identifying and organizing program
outcomes because it connects the
program's activities to proximal
outcomes that are expected to lead to
distal outcomes. Essentially, it is a
series of linked relationships between
program services and the ultimate
social benefits the program is intended
to produce.
Proximal Outcomes:
● The most immediate outcomes of program services
● Not the most important outcomes from a social or policy perspective
Distal Outcomes:
● Of the greatest practical and political importance
● Difficult to measure
● Influenced by many other factors
Rossi, et al., 2004, pp. 209-212
7. Consider Prior Research
● Evaluation research on similar programs
● Be aware of standard definitions and measures that have established policy significance
● There may be known problems with certain definitions or measures
Rossi, et al., 2004, p. 212
8. Consider Unintended
Outcomes
Much emphasis is placed on
identifying and defining the outcomes
that are expected; however, there may
also be unintended positive or negative
outcomes, and the evaluator must
make an effort to identify any potential
unintended outcomes that could be
significant for assessing the program's
effects on the social condition(s).
● All types of prior research can be useful when considering
unintended outcomes
● Relationships with program personnel and participants can be
helpful when considering unintended outcomes
Rossi, et al., 2004, p. 213
9. The SECOND STEP is to decide how
the selected outcomes will be
measured.
An evaluator must consider:
● One-dimensional vs. Multidimensional
● Measurement Procedures and Properties
● Reliability
● Validity
● Sensitivity
● Choice of Outcome Measures
10. Consider dimensions of outcomes
One-dimensional Outcomes:
● One intended outcome
Multidimensional Outcomes:
● Have various components that the
evaluator needs to take into account
● Provide broader coverage of the
concept and allow the strengths of one
measure to compensate for the
weaknesses of another
An evaluator should consider all dimensions before determining final measures.
Rossi, et al., 2004, pp. 214-215
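As an illustration of combining the components of a multidimensional outcome, one common approach is to standardize each component and average the results; the dimensions, scores, and equal weighting below are invented for the example, not prescribed by the chapter.

import numpy as np

# Invented component scores for five participants on three dimensions of a
# multidimensional outcome (e.g., knowledge, behavior, attitude).
components = np.array([
    [12, 3.5, 40],
    [15, 2.0, 55],
    [ 9, 4.0, 35],
    [14, 3.0, 60],
    [11, 2.5, 45],
], dtype=float)

# Standardize each dimension (z-scores) so the components are on a common scale,
# then average them into a single composite score per participant.
z = (components - components.mean(axis=0)) / components.std(axis=0)
composite = z.mean(axis=1)
print(composite)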
11. Consider Measurement
Procedures and
Properties
Data come from a few basic sources:
observations, records, responses to
interviews and questionnaires, and
standardized tests.
The information from these sources
becomes a measurement when it is
operationalized - generated through a
set of specific, systematic operations
or procedures.
Established Procedures and Instruments:
● There are established procedures and instruments in place
for many program areas
Ready-Made Measurements:
● Not necessarily suitable for certain outcomes
Evaluator Developed Measurements:
● Take time that the evaluator may not have
● There are well-established measurement development
procedures for consistency
All measures and procedures (established, ready-made, and evaluator-developed) must be checked for the measurement properties discussed next: reliability, validity, and sensitivity.
Rossi, et al., 2004, pp. 217-218
12. Consider Three Measurement Properties:
Reliability, Validity, and Sensitivity
Reliability: the extent to which the
measure produces the same results
when used repeatedly to measure
the same thing.
Measures of physical characteristics tend to be
more reliable than measures of psychological
characteristics.
Evaluators can measure twice, or include similar
questions, to check for consistency.
Validity: the extent to which the measure
actually measures what it is intended to
measure.
Validity is difficult to test. In practice it depends
on whether the measure is accepted as valid by
stakeholders, or on some comparison showing
that the measure yields results consistent with
other evidence of the characteristic being measured.
Sensitivity: the extent to which the
values on the measure change
when there is a change or
difference in the thing being
measured.
Two ways outcome measures can be insensitive:
1) They may include elements that relate
to something other than what the
program could reasonably be
expected to change
2) They may have been developed largely
for diagnostic purposes - to distinguish
among individuals rather than to detect change
13. Choice of Outcome Measure
An evaluator must take time in selecting
measures. A poorly chosen or poorly conceived measure
can undermine the worth of an impact assessment by
producing misleading estimates.
Rossi, et al., 2004, p. 222
14. The THIRD STEP is to monitor
program outcomes.
Similar to program monitoring (Chapter 6), but outcome
monitoring continually collects and reviews information
relating to program outcomes.
15. Indicators for Outcome Monitoring
● Indicators should be as responsive as possible to program effects
● The most interpretable outcome indicators involve variables that only the program can plausibly affect
● The outcome indicator that is easiest to link to the program is client (customer) satisfaction
○ Problems with customer satisfaction as an indicator:
■ Some customers are not able to recognize program benefits
■ Some customers are reluctant to appear critical and overrate the program's outcomes
Rossi, et al., 2004, pp. 225 - 226
16. Benefits of Outcome Monitoring
● Provides useful and relatively inexpensive information about program effects
● Provides timely information - it can be available within months
● Generates feedback useful for program administration - NOT for assessing the program's effects on
social conditions
Rossi, et al., 2004, p. 225
17. Limitations of Outcome Monitoring
● Outcome indicators can receive undue emphasis from program staff, who may orient services toward the
indicators themselves - like "teaching to the test"
● There is a natural tendency to fudge and pad indicators to make performance look better -
the "corruptibility of indicators"
Rossi, et al., 2004, p. 227
18. The FOURTH STEP is to interpret
outcome data.
When interpreting outcome data, an evaluator needs to
provide a suitable context and bring in information that
provides a relevant basis for comparison or explanation.
19. Evaluator considerations for interpreting outcome data:
● Information about changes in the client mix, relevant demographic and economic trends, and
information about program process and service utilization
● A framework that provides some standard for judging what constitutes better or worse
outcomes within the limitations of the data
○ Pre-post comparisons provide useful feedback to administrators but not credible
findings about the program's impact
○ "Benchmarking" (comparing outcome values with those from similar programs) is only
meaningful for evaluation purposes when all other things are equal between the programs
being compared - a difficult standard to meet
Rossi, et al., 2004, pp. 228-231
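A minimal sketch of the pre-post comparison described above, with invented scores: the computed change is an outcome change, useful as administrative feedback, but it bundles the program effect together with other influences and so is not by itself a credible impact estimate.

import numpy as np

# Invented pre- and post-program scores for the same group of participants.
pre = np.array([42, 38, 50, 45, 40], dtype=float)
post = np.array([48, 44, 53, 50, 47], dtype=float)

# Average pre-post change: an outcome change, not a program effect, because
# maturation, outside events, and changes in the client mix are mixed in.
mean_change = (post - pre).mean()
print(mean_change)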
20. Questions for Consideration:
1. What prior research is being utilized in your evaluation of a program? Are there any
established procedures and instruments that you can use in your evaluation? Are there
any ready-made measurements that you can utilize in your evaluation?
2. Have you ever used a ready-made measure in any type of evaluation? Assess the
suitability of this ready-made measure.
3. Assess the sensitivity of the measures used in an evaluation you have been a part of. Have you
ever used a measurement for group or program evaluation that was largely developed
for diagnostic purposes? What were the implications of using a diagnostic tool for group
or program evaluation?
4. In your experience with evaluations, have you ever experienced the pitfalls associated
with outcome monitoring? Explain why the pitfalls occurred or did not occur.