Evaluation Research and Policy Analysis
Chapter 11
Introduction
Evaluation research: refers to a research purpose rather than a
specific method; seeks to evaluate the impact of interventions,
i.e., whether some intended result was produced
Problem analysis: designed to help public officials choose from
alternative future actions
Evidence-based policy: actions of justice agencies are linked to
evidence used for planning and evaluation
Evidence generation: nonprofit organizations that document and
evaluate programming to create evidence that can be shared with
others
Appropriate Topics for Evaluation and Problem Analysis
Evaluation research is appropriate whenever some policy
intervention occurs or is planned
A policy intervention is action taken for the purpose of
producing some intended result
Problem analysis focuses on deciding what intervention should be
pursued
Future oriented
Linking the Process to Evaluation
Evaluation seeks to link the intended actions and goals of policy
to empirical evidence that answers two questions:
Are policies being implemented as planned?
Are policies achieving their intended goals?
Impact assessment: examines whether policies are having the
desired effects
Process evaluation: examines whether policies are being carried
out as planned
Often conducted together
Getting Started
Evaluability Assessment – a "preevaluation" in which the
researcher determines whether requisite conditions are present
Support from relevant organizations
What the program's goals and objectives are, and how they are
translated into program components
What kinds of records or data are available
Who has a direct or indirect stake in the program
Problem Formulation and Measurement 1
Different stakeholders often have different goals and views as
to how a program should actually operate
Stakeholders: persons and organizations with a direct interest in
the program
Must clearly specify program goals – desired outcomes
Create objectives – operationalized statements of those goals
(e.g., the goal "reduce recidivism" might become "a 10 percent
reduction in rearrests within 12 months of release")
Problem Formulation and Measurement 2
Definition and measurement – specify the target/beneficiary
population; decide between using existing measures and creating
new ones
Measure program contexts, outcomes, program delivery
Designs for Program Evaluation
Randomized evaluation designs – avoid selection bias and allow
the assumption that groups created by random assignment are
statistically equivalent; may not be suitable when the agency or
its staff makes exceptions
Caseflow – represents the process through which subjects are
accumulated into experimental and control groups (see the
sample-size sketch after this list)
Treatment integrity – whether an experimental intervention is
delivered as intended; ≈ reliability
Threatened by midstream changes in program
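To make the caseflow requirement concrete, here is a minimal
Python sketch (not from the text) of the standard two-proportion
sample-size approximation; the function name required_group_size
and the 30 percent vs. 20 percent rearrest rates are illustrative
assumptions.

import math

def required_group_size(p_control, p_treatment, z_alpha=1.96, z_beta=0.84):
    # Approximate per-group n needed to detect a difference between
    # two proportions (two-sided alpha = .05, power = .80 by default).
    p_bar = (p_control + p_treatment) / 2
    effect = abs(p_control - p_treatment)
    n = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * math.sqrt(p_control * (1 - p_control)
                               + p_treatment * (1 - p_treatment))) / effect) ** 2
    return math.ceil(n)

# Illustrative rates: 30% rearrest for controls vs. 20% under the
# program -> roughly 293 subjects are needed in each group.
print(required_group_size(0.30, 0.20))

If an agency's caseflow cannot accumulate groups of roughly this
size over the study period, the statistical tests will be
underpowered.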
Conditions Requisite for Randomized Experiments
Staff must accept random assignment and agree to minimize
exceptions to randomization
Caseflow must produce enough subjects in E and C for
statistical tests
Experimental interventions must be consistently applied to E
and withheld from C
Need equivalence prior to intervention, and the ability to detect
differences in outcome measures after intervention (a minimal
check of pre-intervention equivalence is sketched below)
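As a minimal sketch of these conditions, assuming hypothetical
case IDs and pretest scores (the function randomize and all
numbers are illustrative, not from the text), random assignment
and a pre-intervention equivalence check might look like this in
Python:

import random
import statistics

def randomize(case_ids, seed=42):
    # Shuffle the incoming cases and split them into experimental (E)
    # and control (C) groups of near-equal size; a fixed seed keeps
    # the assignment reproducible and auditable.
    pool = list(case_ids)
    random.Random(seed).shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]

# Hypothetical pretest scores keyed by case ID, used to verify that
# the groups are roughly equivalent before the intervention.
pretest = {case: random.gauss(50, 10) for case in range(200)}
E, C = randomize(pretest)
print(statistics.mean(pretest[c] for c in E))  # the two means
print(statistics.mean(pretest[c] for c in C))  # should be close

In practice the comparison would use a formal test rather than
eyeballing means, but the logic is the same: equivalence before
the intervention is what licenses attributing later differences
to the treatment.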
Home Detention: Two Randomized Studies
Combining home detention with ELMO (electronic monitoring)
The juvenile program paid less attention to delivering program
elements and using ELMO information than the adult program did
Difficult to maintain desired level of control over experimental
conditions
Also difficult when more than one organization is involved
Randomization does not control for variation in treatment
integrity and program delivery; other methods must be used to
monitor them
Quasi-Experimental Designs
No random assignment to E and C
Often “nested” in experimental designs as backups
Ex post evaluation – conducted after experimental program has
gone into effect
Lack built-in controls for selection and other threats to
internal validity
You must construct E and C groups that are as similar as possible
In interrupted time-series designs, examine the causal process to
judge whether the intervention did or did not produce a change in
the outcome measure (see the sketch below)
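As a sketch of the interrupted time-series logic, the following
Python code fits a simple segmented regression to synthetic
monthly counts; the 48-month series, the variable names, and the
effect sizes are assumptions for illustration only.

import numpy as np

rng = np.random.default_rng(0)
months = np.arange(48)                # 24 pre- and 24 post-intervention months
post = (months >= 24).astype(float)   # indicator: 1 after the intervention begins
since = np.where(post == 1, months - 24, 0)  # months elapsed since intervention

# Synthetic outcome: mild upward trend, an 8-unit drop at the
# intervention, plus random noise.
y = 50 + 0.5 * months - 8 * post + rng.normal(0, 3, size=48)

# Segmented regression: y = b0 + b1*month + b2*post + b3*since
X = np.column_stack([np.ones(48), months, post, since])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("level change at intervention:", round(coef[2], 2))    # ~ -8
print("post-intervention slope change:", round(coef[3], 2))  # ~ 0

A real analysis would also have to address autocorrelation in the
series (for example with ARIMA models or autocorrelation-robust
standard errors) before attributing the level or slope change to
the intervention.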
Problem Analysis and Scientific Realism
Realists suggest that similar interventions will have different
outcomes in different contexts
Evaluators should search for mechanisms (IVs) acting in context
(assorted intervening variables) to explain outcomes (DVs)
Centers on problems, not particular incidents
Problem-oriented policing: begins by analyzing a number of
incidents and then taking steps to address the underlying problem
Problem solving: a fundamental tool in problem-oriented policing
Other Applications of Policy Analysis
Space- and Time-based Analysis – increased prevalence due to
technological advances
Problem solving tools and processes
SARA (scanning, analysis, response, assessment)
SACSI (Strategic Approaches to Community Safety Initiative)
Political Context of Applied Research
Different stakeholder interests can produce conflicting
perspectives on evaluations
Researcher must identify stakeholders & perspectives
Educate stakeholders on why evaluation should be conducted
Explain that applied research is used to determine what works
and what does not
Political concerns may color evaluation! Be careful!