Chapter 13
Evaluation Research and Problem Analysis
•Evaluation Research: Refers to a research purpose rather than a specific method; seeks to evaluate the impact of interventions – whether some intended result was produced
•Problem Analysis: Designed to help public
officials choose from alternative future actions
•Policy Intervention: An action taken for the
purpose of producing some intended result
•Evidence-Based Policy: The actions of justice
agencies are linked to evidence used for planning
and evaluation
•Begins with a demand supporting a new course
of action or opposition to existing policy
•Policymakers consider ultimate goals and
actions to achieve those goals
•Outputs - The means to achieve desired goals
•Impacts – Refer to the basic question of what a policy seeks to achieve
•If some policy action is taken, then we expect some result
•Are policies being implemented as planned?
•Are policies achieving their intended goals?
•Evaluation seeks to link intended actions and
goals of policy to empirical evidence that:
•Policies are being carried out as planned (process
evaluation)
•Policies are having the desired effects (impact
assessment)
•Often conducted together
•Learning policy goals is a key first step in doing
evaluation research
•Evaluability Assessment: “Pre-evaluation” –
researcher determines whether requisite
conditions are present
•Support from relevant organizations
•What goals and objectives are; how they are
translated into program components
•What kinds of records or data are available
•Who has a direct or indirect stake in the program
•Different stakeholders often have different goals
and views as to how a program should actually
operate
•Must clearly specify outcomes – program goals
and objectives
•Create objectives – Operationalized statements of program goals
•Definition and measurement – Specify the target/beneficiary population; decide between using current measures or creating new ones
•Measure program contexts, outcomes, and program delivery
•Randomized Evaluation Designs: Avoid selection bias and allow the assumption that groups created by random assignment are statistically equivalent; may not be suitable when agency staff make exceptions (see the sketch after this list)
•Case Flow: Represents process through which
subjects are accumulated into experimental and
control groups
•Treatment Integrity: Whether an experimental intervention is delivered as intended; roughly analogous to measurement reliability
•Threatened by midstream changes in the program
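A minimal sketch of the random assignment step described above; the function name and case IDs are hypothetical, not from the chapter:

```python
import random

def randomize(case_ids, seed=42):
    """Randomly assign cases to experimental and control groups.

    Shuffling the full list and splitting it in half gives every case
    the same chance of landing in either group, which is what justifies
    treating the groups as statistically equivalent.
    """
    rng = random.Random(seed)  # fixed seed keeps assignments reproducible and auditable
    ids = list(case_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"experimental": ids[:half], "control": ids[half:]}

groups = randomize(range(1, 201))  # 200 hypothetical cases
print(len(groups["experimental"]), len(groups["control"]))  # 100 100
```

Exceptions granted by agency staff after assignment are exactly what this procedure cannot absorb, which is why the next list stresses minimizing them.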
•Staff must accept random assignment and agree to
minimize exceptions to randomization
•Case flow must be adequate to produce enough subjects in each group that statistical tests can detect significant differences in outcome measures (the sketch after this list illustrates the arithmetic)
•Experimental interventions must be consistently
applied to treatment groups and withheld from control
groups
•Need equivalence prior to intervention, and ability to
detect differences in outcome measures after
intervention
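As a rough illustration of why case flow matters, the standard two-proportion sample-size formula shows how many subjects each group needs before a statistical test can reliably detect a difference; the rearrest rates below are hypothetical:

```python
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate subjects per group for a two-sided two-proportion z-test."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for the significance level
    z_beta = norm.ppf(power)           # quantile corresponding to the desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

# Hypothetical: 40% control-group rearrest rate vs. a hoped-for 30% in treatment
print(round(n_per_group(0.40, 0.30)))  # ~353 subjects per group
```

If referrals produce fewer cases than this, the study will be underpowered no matter how faithfully the intervention is delivered.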
•Example: combining home detention with electronic monitoring (ELMO)
•The juvenile program paid less attention to delivering program elements and using ELMO information than the adult program did
•Difficult to maintain desired level of control over
experimental conditions
•Also difficult when more than one organization is
involved
•Randomization does not control for variation in
treatment integrity and program delivery; utilize
other methods
•Quasi-experimental designs: No random assignment to experimental and control groups
•Often “nested” in experimental designs as backups
•Lack built-in controls for selection and other internal validity threats
•Researchers must construct experimental and comparison groups that are as similar as possible (one common matching approach is sketched below)
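A minimal sketch of greedy nearest-neighbor matching, one common way to build a comparison group from observed characteristics; the cases and covariates are invented for illustration:

```python
import math

# Hypothetical cases: (id, age, prior_arrests)
treated = [(1, 19, 2), (2, 24, 5)]
pool = [(10, 18, 2), (11, 30, 1), (12, 25, 4), (13, 22, 6)]

def distance(a, b):
    """Euclidean distance on the observed covariates (age, priors)."""
    return math.dist(a[1:], b[1:])

# Greedy matching without replacement: each treated case takes the
# closest remaining untreated case from the pool.
comparison = []
available = list(pool)
for case in treated:
    match = min(available, key=lambda c: distance(case, c))
    available.remove(match)
    comparison.append((case[0], match[0]))

print(comparison)  # [(1, 10), (2, 12)]
```

Matching only balances the characteristics you measure; unlike random assignment, it cannot rule out selection on unobserved differences.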
•Ex post evaluation: Conducted after
experimental program has gone into effect
•Full Coverage Programs: Sentencing Guidelines
•Larger Treatment Units: Neighborhood crime
prevention program
•Interrupted time-series designs: Require attention to different issues because researchers cannot normally control how reliably the experimental treatment is actually implemented (a segmented-regression sketch follows this list)
•Watch for instrumentation, history, and construct validity threats
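Interrupted time-series results are often estimated with segmented regression, which tests for a change in level and trend at the intervention point. A minimal sketch on simulated monthly counts (all numbers invented):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
months = np.arange(24)                 # 12 months pre-, 12 months post-intervention
post = (months >= 12).astype(int)      # 1 once the policy takes effect
time_since = np.where(post, months - 12, 0)

# Simulated counts: flat pre-intervention series with a drop of ~10 afterward
counts = 50 - 10 * post + rng.normal(0, 2, size=24)

# Outcome ~ pre-existing trend + level change + post-intervention trend change
X = sm.add_constant(np.column_stack([months, post, time_since]))
print(sm.OLS(counts, X).fit().params)  # level-change coefficient should be near -10
```

A level shift alone cannot distinguish the policy from a coincidental event, which is why the history and instrumentation threats above still need separate attention.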
•Problem analysis, coupled with scientific
realism, helps public officials use research to
select and assess alternative courses of action
•Realists suggest that similar interventions will
have different outcomes in different contexts
•Evaluators should search for mechanisms (independent variables) acting in context (assorted intervening variables) to explain outcomes (dependent variables)
•Appropriate in small-scale evaluations directed
toward solving a particular problem in a specific
context
•Problem Oriented Policing
•Problem solving: A fundamental tool in problem-
oriented policing
•How-To-Do-It Guides: A general guide to
crime analysis to support problem-oriented
policing
•Problem & Response Guides: Describe how to analyze very specific types of problems and what responses are known to be effective or ineffective
•Nanci Plouffe and Rana Sampson (2004) began
their analysis of vehicle theft by comparing
Chula Vista to other southern California cities
•Theft rates tended to be higher for cities closer to
the border
•10 parking lots accounted for 25% of thefts & 20% of break-ins in the city (a toy version of this concentration calculation follows the list)
•6 of the 10 lots were among the top 10 calls-for-
service locations in Chula Vista
•Auto theft hot spots also tended to be hot spots
for other kinds of incidents
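The concentration statistics Plouffe and Sampson report come from simple counts over incident records; a toy version with invented data:

```python
from collections import Counter

# Hypothetical records: the location of each reported vehicle theft
thefts = (["lot_A"] * 30 + ["lot_B"] * 20 + ["lot_C"] * 10
          + [f"other_{i}" for i in range(140)])   # 140 scattered one-off locations

by_location = Counter(thefts)
top3 = by_location.most_common(3)                 # the three busiest locations
share = sum(n for _, n in top3) / len(thefts)
print(top3)                                       # [('lot_A', 30), ('lot_B', 20), ('lot_C', 10)]
print(f"{share:.0%} of thefts at 3 locations")    # 30% of thefts at 3 locations
```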
•Space- and Time-Based Analysis: Increased prevalence due to technological advances
•Crime maps usually represent at least four different things: (1) one or more crime types; (2) space or area; (3) some time period; and (4) some dimension of land use, usually streets (see the sketch after this list)
•Problem solving tools and processes
•Strategic Approaches to Community Safety
Initiatives (SACSI)
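Those four dimensions map directly onto a grouped count of incident records; a minimal sketch with invented fields and data:

```python
from collections import Counter

# Hypothetical incidents: (crime_type, street_block, month)
incidents = [
    ("auto theft", "400 Broadway", "2024-01"),
    ("auto theft", "400 Broadway", "2024-01"),
    ("burglary",   "400 Broadway", "2024-02"),
    ("auto theft", "800 H St",     "2024-01"),
]

# One count per (type, place, period) cell -- the data behind a crime-map layer
cells = Counter(incidents)
for (crime, block, month), n in cells.most_common():
    print(f"{month}  {block}: {n} {crime}")
```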
[Figure: map of block groups with > 13% abandoned buildings, 1995–1998]
•Different stakeholder interests can produce
conflicting perspectives on evaluations
•Researcher must identify stakeholders &
perspectives
•Educate stakeholders on why evaluation should
be conducted
•Explain that applied research is used to
determine what works and what does not
•Political concerns and ideology may color evaluations; researchers should anticipate and guard against this
