2010 Bush Foundation: Inputs, Process Indicators, and Intermediate Outcomes



1. Inputs, Process Indicators, & Intermediate Outcomes: Tools for Answering Evaluation Research Questions
   Session 3
   Steve Kimball, Tony Milanowski, & Chris Thorn
   Value-Added Research Center, University of Wisconsin – Madison
   Bush Foundation Teacher Effectiveness Initiative
   January 12 & 13, 2010, Minneapolis, MN
2. Session Goals
   • Consider useful alternatives to experimental or quasi-experimental designs
     – For mid-course corrections
     – To provide some evidence of program effects
   • Review & help you develop measures of processes & intermediate outcomes
3. Typical Situation
   • Evaluation starts after program implementation has begun
     – No pre-implementation data on some key variables, such as instructional practice
   • No randomization of treatment
     – Educator choice or assignment based on perceived need
     – Hard to find an equivalent control group
   • Limited resources
4. Basic Design Elements
   • Mixed-methods process tracing
     – How did the purported causal chain play out?
   • Non-equivalent but policy-relevant comparison groups
   • Implementation variation
     – If within-project variation was present, was that variation related to outcomes?
5. Logic for Pattern-Match Design
   • Program was well implemented
   • Expected intermediate outcomes occurred
   • Outcome measures changed in intended direction
   • Conclusion: theory of action appears to be operating & changes in outcomes likely due to program
6. Basic Design
   [Diagram: Implementation (fidelity, variation) → Intermediate outcomes (educator reactions → behavior change) → Outcome change, all within Context]
7. Potential Non-equivalent Comparison Groups
   • Non-implementers
   • District average
   • Benchmark schools
   • State average for demographically similar schools/districts
8. Comparing Trends I
   [Line chart: outcome measure (y-axis, 0–45) by years before/after project start (x-axis, –6 to +6), project schools vs. comparison group]
9. Comparing Trends II
   [Line chart: teacher turnover rate (y-axis, 2%–14%) by years before/after program start (x-axis, –5 to +5), program schools vs. other schools]
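The trend comparisons in slides 8–9 can be summarized numerically. A minimal sketch, using entirely hypothetical data, of the underlying contrast: compare the pre/post change in program schools against the same change in the non-equivalent comparison group (a simple difference-in-differences calculation).

```python
# Hypothetical mean outcome by year relative to program start (year 0 = start).
# These numbers are illustrative only, not data from the slides.
program = {-3: 21.0, -2: 22.0, -1: 21.5, 0: 23.0, 1: 26.0, 2: 28.5}
comparison = {-3: 20.5, -2: 21.0, -1: 21.0, 0: 21.5, 1: 22.0, 2: 22.5}

def mean(values):
    return sum(values) / len(values)

def pre_post_means(series):
    """Split a year->outcome series at program start and average each side."""
    pre = [v for yr, v in series.items() if yr < 0]
    post = [v for yr, v in series.items() if yr >= 0]
    return mean(pre), mean(post)

prog_pre, prog_post = pre_post_means(program)
comp_pre, comp_post = pre_post_means(comparison)

# Difference-in-differences: program change minus comparison-group change.
did = (prog_post - prog_pre) - (comp_post - comp_pre)
print(f"Program change:    {prog_post - prog_pre:+.2f}")
print(f"Comparison change: {comp_post - comp_pre:+.2f}")
print(f"Diff-in-diff:      {did:+.2f}")
```

A positive difference-in-differences is consistent with (but does not prove) a program effect, which is why the deck pairs this with implementation and intermediate-outcome evidence.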
10. Measuring Inputs, Processes, & Intermediate Outcomes
11. Logic Model
12. Measuring Implementation
   • Key processes from logic model
   • Supplement by reviewing common implementation problems:
     – Educators do not understand the program
     – Lack of stakeholder support
     – Conflicting/competing initiatives
     – Supporting systems not in place
13. Implementation Measures: Inputs
   • Spending patterns & costs
     – Were funds spent as planned?
     – Was the funding provided sufficient to support planned activities?
   • Staffing: adequacy & continuity; champion
   • Stakeholder attitudes
   • Context features that would work with/against the program
   • Data system adequacy (data quality plan)
14. Implementation Measures: Activities & Outputs
   • Quantitative
     – # of communication sessions/newsletters/web site hits
     – % of schools in which leaders communicated the program to staff
     – % of target educators receiving communications
     – % of schools with program coordinators in place & trained
     – % of teachers with a valid teacher-student link
     – Change in attributes of incoming teacher candidates
     – Planned vs. actual $ spent on in-school supports
     – Actual vs. planned warranty costs (differentiation)
   • Data sources: grant application, budgets, self-assessments, annual reports, administrative data
15. Implementation Measures: Activities & Outcomes
   • Qualitative examples
     – Educators have access to a clear & complete description of the program via multiple channels
     – Performance measures actually used to evaluate teachers and programs were the same as in the design
     – Process in place & used to correct performance measurement or mentoring problems
     – Alignment of program with potentially competing/conflicting initiatives
   • Data sources: interviews, document review (grant applications, self-assessments, annual reports, meeting agendas & minutes)
16. Implementation Rubrics, Indices & Scorecards
   • Combine quantitative output measures & qualitative assessments to summarize quality of implementation & implementation fidelity on key dimensions
   • Provide a way to translate judgments into numbers for use in statistical analyses (e.g., is variation across schools related to intermediate or long-range outcomes?)
   • Examples
     – TAP Implementation Standards
     – Berends, Bodilly, & Kirby, 2002, Chapter 4 (RAND study of New American Schools)
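Slide 16's idea of translating rubric judgments into numbers can be sketched in a few lines. The dimensions and weights below are hypothetical, not taken from the TAP standards or the RAND study; the point is only the mechanism: weight each 0–4 rubric rating and rescale to a 0–100 fidelity index per school.

```python
# Hypothetical rubric dimensions and weights (illustrative only).
RUBRIC_WEIGHTS = {
    "communication": 0.25,        # e.g., staff reached via multiple channels
    "coordinator_in_place": 0.25, # program coordinator trained & active
    "measures_as_designed": 0.30, # performance measures used as designed
    "competing_initiatives": 0.20 # alignment with other initiatives
}

def fidelity_index(ratings):
    """Weighted average of 0-4 rubric ratings, rescaled to 0-100."""
    total = sum(RUBRIC_WEIGHTS[dim] * score for dim, score in ratings.items())
    return 100 * total / 4  # 4 is the maximum possible weighted score

school_a = {"communication": 4, "coordinator_in_place": 3,
            "measures_as_designed": 2, "competing_initiatives": 3}
print(f"School A fidelity index: {fidelity_index(school_a):.1f}/100")
```

A per-school index like this is what makes the statistical question on the slide answerable, since fidelity scores can then be correlated with intermediate or long-range outcomes across schools.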
17. Measuring Intermediate Outcomes: Educator Reactions
   • Priority areas
     – Program understanding
     – Warranty costs
     – Performance-reward contingency
     – Effort-performance link
     – Fairness
18. Educator Reactions Surveys
   • Motivation model elements
   • Motivational responses
   • See "Short Form Teacher Reaction Survey"
   • Other examples:
     – http://www.performanceincentives.org/data/files/news/BooksNe
     – http://www.performanceincentives.org/data/files/directory/Conf
19. Educator Reactions Interviews
   • Find out what higher education faculty, schools & teachers are doing in response to the program
   • Uncover unexpected consequences
   • Facilitate survey development
   • Some examples of what we have learned via interviews
20. Is Motivation Taking Place? Interview Questions
   – How has the program redesign affected your teaching?
   – How has the program affected how your school is run?
   – Has the introduction of a warranty changed how you focus your efforts?
   – Have you done anything differently in order to improve your chances of receiving the incentive?
   – Has the warranty motivated you to work harder or change the focus of your efforts?
21. Measuring Intermediate Outcomes: Behavior Change
   • Surveys
   • Focus groups
   • Interviews
   • Observations
   • Unobtrusive measures
22. Measuring Intermediate Outcomes: Behavior Change
   Surveys of instructional practice are attractive, but hard to do right.
   • Need a theory of instruction
   • Content specificity
   • Response problems
     – Social desirability/demand effects
     – Memory
     – Can most educators recognize depth of practice change?
   • Can you build a confirmatory survey based on interviews or focus groups?
23. Potential Interview Questions
   • Principals/other school leaders
     – Have teachers been doing anything differently in the classroom since the program began?
   • Teachers
     – Have your school administrators been doing anything differently since the program began?
   • Central office administrators
     – Have the school administrators you work with been doing anything differently since the program began?
24. Unobtrusive Measures
   • Change in PD demand (volume, content)
   • Curriculum material purchases
   • Scheduling changes/time allocations
   • Staffing changes
   • Staff meeting agendas
   • School improvement plans
25. Your Turn
   • Work on developing process & intermediate outcome measures that fit your program
   • Share innovative ways you have used to measure inputs, processes, & intermediate outcomes