Innovative Synthesis Methodologies
Donna Smith-Moncrieffe
Canadian Evaluation Society Conference
Fairmont Royal York, Toronto
June 9-12, 2013
2
Summary of the Presentation
● NCPC Mandate
● Government Context
● Evaluation considerations at the corporate level (performance)
● Evaluation considerations at the project level (knowledge products)
● Overview of 3 types of synthesis methods
  - Key points
  - Challenges
● Case Study (step-by-step process)
  - Collating data from projects with different target groups, interventions and outcomes
  - Synthesis tables
  - Reporting options for summarizing various statistical results: percentages, effect sizes, t-tests and F ratios
● Conclusions
3
Government Context: Evaluation Approaches
4
Why did we need to Synthesize the Data?
● We needed to respond to questions about “what works” in crime prevention
● We needed to provide findings and conclusive statements about 13 programs (36 project sites in total) with different:
  - Target groups;
  - Interventions;
  - Outcomes; and
  - Evaluation designs
    ● Single-site repeated-measures design
    ● Quasi-experimental design (sometimes with a matched comparison group)
5
Types of Synthesis Methods
6
Synthesis Method Requirements for Quality and Rigor
7
What approach did we use to Synthesize 13 Evaluations?
8
Step 1: Identify Key Questions
What behaviour-related changes were made (i.e. changes in violent offending, police contact and victimization)?
- What percentage of projects made favorable changes in relation to the criminal justice system (e.g. reduced violent offending, police contacts)?
- How much change was made?
- What contextual and program-related factors contributed to these changes (e.g. geographical location, partnerships, resources)?
9
Step 2: What are the inclusion criteria?
● In the Government context, we need to use as many evaluation studies as possible in the synthesis, even when they do not meet the most rigorous evaluation standards.
● Evaluation studies that met the following minimum requirements were included in the synthesis (a screening sketch follows this slide):
  - The study uses quantitative data (descriptive or inferential statistics);
  - Measures correspond with key crime-related outcomes (e.g. association with anti-social peers, emotional regulation, substance abuse, offending, education, employment);
  - The study used at least the basic evaluation design (single-group pre-post repeated-measures design); and
  - At least pre- and post-test measures were available
10
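The screening implied by these minimum requirements can be expressed as a simple per-study check. The sketch below is a minimal illustration only: the field names (has_quantitative_data, outcome_measures, design, has_pre_post), the outcome set and the accepted-design list are assumptions made for the example, not NCPC's actual data schema or screening tool.

```python
# Hypothetical screening sketch for the inclusion criteria on the previous slide.
# Field names and the CRIME_OUTCOMES set are illustrative assumptions, not NCPC's schema.

CRIME_OUTCOMES = {
    "anti-social peers", "emotional regulation", "substance abuse",
    "offending", "education", "employment",
}

# Accepted designs, from the basic single-group pre-post design upward.
ACCEPTED_DESIGNS = {"single-group pre-post", "quasi-experimental", "experimental"}


def meets_inclusion_criteria(study: dict) -> bool:
    """Check a candidate study against the four minimum requirements."""
    has_quant = study.get("has_quantitative_data", False)
    measures_crime_outcomes = bool(CRIME_OUTCOMES & set(study.get("outcome_measures", [])))
    has_minimum_design = study.get("design") in ACCEPTED_DESIGNS
    has_pre_post = study.get("has_pre_post", False)
    return all([has_quant, measures_crime_outcomes, has_minimum_design, has_pre_post])


example = {
    "name": "Example project",
    "has_quantitative_data": True,
    "outcome_measures": ["substance abuse", "offending"],
    "design": "single-group pre-post",
    "has_pre_post": True,
}
print(meets_inclusion_criteria(example))  # True
```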
Step 3: Enter the Data (Partial Sample)
Name of Study | Description of Variable | Sample Size | Result | P Level | Type of Change | Level of Study
Project Intervention Toronto | Arrests | E=76, C=43 | F=0.291 | p=.823 | No Change | Exemplary
Multi-Systemic Therapy | Youth not arrested for an offence | E=28, C=12 | E=89.3%, C=66.7% | N/A | Favorable | Basic
Towards No Drugs | Weapon Carrying | E=847, C=54 | F=0.63 | p<0.01 | Favorable | Notable
Velocity | Police contacts | E=87 | 69% (reduction) | N/A | Favorable | Notable

(E = experimental group; C = comparison group; a record-entry sketch follows this slide)

Interpretation of Results
11
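Collating results from projects with different target groups, interventions and statistics becomes workable once every finding is entered as one record with the same fields as the table above. The sketch below illustrates such a record structure using the partial sample rows; the class and field names are assumptions made for illustration, not the NCPC information management system.

```python
# Minimal sketch of a common record structure for collating heterogeneous
# evaluation results (one row per measure, mirroring the partial sample above).
# Field names are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional


@dataclass
class SynthesisRecord:
    study: str                 # Name of Study
    variable: str              # Description of Variable
    sample_size: str           # e.g. "E=76, C=43" (E = experimental, C = comparison)
    result: str                # statistic or percentage, exactly as reported
    p_level: Optional[str]     # None where no significance test was reported
    change: str                # "Favorable", "Unfavorable" or "No Change"
    study_level: str           # "Basic", "Notable" or "Exemplary"


rows = [
    SynthesisRecord("Project Intervention Toronto", "Arrests", "E=76, C=43",
                    "F=0.291", "p=.823", "No Change", "Exemplary"),
    SynthesisRecord("Multi-Systemic Therapy", "Youth not arrested for an offence",
                    "E=28, C=12", "E=89.3%, C=66.7%", None, "Favorable", "Basic"),
    SynthesisRecord("Towards No Drugs", "Weapon Carrying", "E=847, C=54",
                    "F=0.63", "p<0.01", "Favorable", "Notable"),
    SynthesisRecord("Velocity", "Police contacts", "E=87",
                    "69% (reduction)", None, "Favorable", "Notable"),
]

favorable = sum(r.change == "Favorable" for r in rows)
print(f"{favorable}/{len(rows)} entries favorable")  # 3/4 entries favorable
```

Keeping the reported statistic and p-level as text preserves non-standard results (F ratios, percentages, "p<0.01") without forcing them into a single numeric format before synthesis.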
Step 4: Analysis and Interpretation
Synthesis for 9 evaluation studies measuring changes in criminal behaviours (overall interpretation)

Background Information
• Number of evaluation studies: 9
• Number of sites: 11
• Common measures include: 1) Arrests; 2) Non-Violent Offending; 3) Violent Offending; 4) Criminal Victimization; and 5) Weapon Carrying

Quantitative results (see the tally sketch after this slide)
• % of sub-measures showing favorable change: 50%
• % of projects demonstrating favorable change (showing at least one statistically significant result at the project level): 75%
• % of projects demonstrating unfavorable change: 0%
• % of projects demonstrating no change: 25%

Qualitative results
• Provide qualitative results at the model, site or project level to qualify the quantitative findings
12
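The quantitative percentages above follow the tally described in the speaker notes (Note #13): the share of favorable sub-measures overall, and a majority-rule label per project, with tied projects excluded. The sketch below is a minimal illustration of that tally; the project names and result lists are made-up data for the example, not NCPC findings.

```python
# Hedged sketch of the aggregation described in the speaker notes (#13):
# % of favorable sub-measures, and a majority-rule label for each project
# (projects with tied counts are excluded from the project-level percentage).
from collections import Counter

# Each project maps to its sub-measure results; the data here are illustrative only.
sub_measures = {
    "Project A": ["Favorable", "Favorable", "Favorable", "No Change"],
    "Project B": ["No Change", "No Change", "Favorable"],
    "Project C": ["Favorable", "Unfavorable"],  # equal ratio -> excluded
}

all_results = [r for results in sub_measures.values() for r in results]
pct_favorable_sub_measures = 100 * all_results.count("Favorable") / len(all_results)

project_labels = []
for project, results in sub_measures.items():
    counts = Counter(results).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        continue  # no single label can be assigned, so the project is excluded
    project_labels.append(counts[0][0])

pct_favorable_projects = 100 * project_labels.count("Favorable") / len(project_labels)

print(f"% of favorable sub-measures: {pct_favorable_sub_measures:.0f}%")
print(f"% of projects demonstrating favorable change: {pct_favorable_projects:.0f}%")
```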
Step 5: Reporting Aggregate Data
13
Step 5: Reporting Aggregate Data (Sample of Approach #1)
14
Step 5: Reporting (Sample of Approach #2: reporting on magnitude of change)
15
Conclusions
● What synthesis method works best when the evaluation studies have a variety of interventions, outcomes and statistical measures?
  - Elements of the Realist Synthesis review and the Multiple-Case Study Synthesis are recommended; and
  - NCPC utilized elements of both of these synthesis methods but could benefit from responding to the Why questions incorporated in the Multiple-Case Study Synthesis
● What reporting method works best to summarize non-standard data?
  - Approaches #1 and #2 are both useful; and
  - Assess your audience to determine the best reporting method
16
Conclusions cont’d
● What planning activities would be required to utilize both the Realist and Multiple-Case Study methods?
  - Develop strategic direction (a strategic plan to help focus the direction of the evaluation);
  - Prioritize key areas to minimize complexity;
  - Ensure that the information management system is designed to:
    ● Clarify the unit of analysis (by model, by project, by evaluation question, etc.);
    ● Respond to How and Why questions, such as: How does the program produce these effects? How does the intervention contribute to change?; and
    ● Collect data about the context for each model, site or project.
17
References for Recommended Synthesis Methods
18
19
Thank You
Contact:
Donna Smith-Moncrieffe, BSc. Crim Dip, MSc.
Senior Evaluation Advisor
Public Safety Canada, National Crime Prevention Centre
Policy, Research and Evaluation
Tel: 416-952-0423
Fax: 416-952-0483
Email: donna.smith-moncrieffe@ps-sp.gc.ca
Web: www.PublicSafety.gc.ca | www.SecuritePublique.gc.ca


Editor's Notes

  • #6 Limitations: Most of the evaluations had only completed three-quarters of their implementation to date (these findings are not based on final reports).
  • #7 Discuss the possibilities of using meta-analysis
  • #8 All good synthesis methodology should include these types of triangulation.
  • #10 The NCPC had questions related to knowledge, attitudes, skills, and risk and protective factors; however, for the purposes of this presentation we are only sharing questions related to behavioural change.
  • #11 Evaluation studies are expected to measure project participants at pre-program, post-program, 6-month and 1-year follow-up.
  • #12 E = experimental group; C = comparison group.
  • #13 To determine the % of sub-measures that were favorable, the following calculation was used: (# of favorable sub-measures / total # of sub-measures in each table). To determine the % of projects that demonstrated favorable change in each table, the most frequently occurring result (i.e. favorable, unfavorable or no change) was calculated for each unique program; the result that occurred most frequently determined the result for that program. For example, if a program had 4 sub-measures related to anger, with 3 of the 4 sub-measures showing a favorable change, that program would be assigned the "favorable" label. Programs that had an equal ratio of favorable to unfavorable or no-change results were excluded from the sub-analysis, as a single label could not be accurately assigned. The overall calculation was: # of favorable program results / total # of program results. The % of projects demonstrating a favorable change for the short-term, intermediate and long-term outcomes could be derived using this method. The appendix section provides reference information about the individual studies, as well as details explaining how sites were selected for an impact evaluation.
  • #14 In the NCPC case study, the only common variables identified included the following concepts. Favorable change: a result that demonstrates a statistically significant (p<0.05) or clinically significant positive change. Non-favorable change: a result that demonstrates a negative change in the outcomes being measured. No change: a result that does not demonstrate a statistically significant change between the pre- and post-test follow-up periods.
  • #15 Note that the sample size for youth having behavioural changes is smaller; this would explain the relatively higher percentage of projects demonstrating reductions. Approximately 33 outcomes (indicators) form the basis for the immediate outcomes and approximately 6 outcomes form the basis for the long-term outcomes. The relatively smaller number of long-term outcomes demonstrated more favorable change, which is why there is relatively more favorable change when compared to the intermediate outcomes. Intermediate outcome examples include: substance abuse, employment, family attachment, withdrawal, aggression, anger management, rule breaking and self-esteem. Long-term outcomes involve: police contact, arrests, victimization, non-violent offending and violent offending.
  • #16 Fortunately we have some structure for categorizing effect sizes and eta-squared (a calculation sketch follows these notes). Cohen's d: 0.2 = low effect size; 0.5 = moderate effect size; 0.8 = large effect size. Eta-squared (ANOVA; applicable if the F ratio is statistically significant): .09 = low strength and magnitude; .14 = moderate strength and magnitude; .22 = large strength and magnitude. The eta-squared value reflects the strength and magnitude related to a main or interaction effect. Eta-squared is very similar to R-squared in a regression equation (it indicates the amount of contribution the variables or intervention make to the outcome).
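
The effect-size bands quoted in Note #16 can be applied mechanically once d or eta-squared has been computed. The sketch below is a minimal illustration of those bands: the helper functions and the sample numbers are assumptions made for the example, and the formulas used (standardized mean difference, and SS_effect / SS_total) are the standard definitions rather than an NCPC tool.

```python
# Hedged sketch of the effect-size categories quoted in Note #16.
# Cohen's d thresholds: 0.2 low, 0.5 moderate, 0.8 large.
# Eta-squared thresholds: .09 low, .14 moderate, .22 large.


def cohens_d(mean_exp: float, mean_ctrl: float, sd_pooled: float) -> float:
    """Standardized mean difference: (M_experimental - M_comparison) / pooled SD."""
    return (mean_exp - mean_ctrl) / sd_pooled


def eta_squared(ss_effect: float, ss_total: float) -> float:
    """Proportion of total variance explained by the effect: SS_effect / SS_total."""
    return ss_effect / ss_total


def categorize(value: float, thresholds: tuple) -> str:
    """Map a value onto the low/moderate/large bands (below the first band: 'negligible')."""
    low, moderate, large = thresholds
    if value >= large:
        return "large"
    if value >= moderate:
        return "moderate"
    if value >= low:
        return "low"
    return "negligible"


d = cohens_d(mean_exp=12.0, mean_ctrl=10.0, sd_pooled=4.0)            # 0.5
print("Cohen's d:", d, categorize(d, (0.2, 0.5, 0.8)))                # moderate
eta2 = eta_squared(ss_effect=15.0, ss_total=100.0)                    # 0.15
print("eta-squared:", eta2, categorize(eta2, (0.09, 0.14, 0.22)))     # moderate
```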