Beyond Scaling Up: Approaches to Evaluation

This presentation was given at the 'Beyond Scaling Up: Pathways to Universal Access' workshop, held at the Institute of Development Studies, Brighton, on 24-25 May 2010. The event was co-sponsored by the Future Health Systems Research Programme Consortium and the STEPS Centre. Lucas presented on approaches to evaluation.

Published in: Health & Medicine

Transcript

  • 1. Beyond Scaling Up: Approaches to evaluation in complex and rapidly changing situations
  • 2. Evidence based policy and practice
    • Long history of clinical trials to decide: ‘what works?’ or even ‘what works best?’
    • Experience with health-systems research much less encouraging – interesting individual studies but limited accumulated knowledge
    • Even simple medical interventions involve complex social interventions
      • ‘The implementation is the intervention’
      • Context (including historical context – path dependency)
      • Outcomes dependent on detailed processes & pathways
  • 3. Who needs what evidence?
    • Multiple stakeholders: implementers, providers, regulators, civil society, community groups, local governments, national governments, funding agencies
    • Need very different types of ‘evidence’ to address a variety of concerns: attribution, accountability, justification, learning, management, etc.
    • Many stakeholders may have limited understanding of what types of information would be most useful to them.
  • 4. Monitoring, learning and evaluation
    • MLE process needs to:
      • Understand priorities of different stakeholders
      • Make objectives transparent
      • Be aware of trade-offs between objectives
      • Make a realistic assessment of what can be achieved and allocate resources accordingly
    • No single ‘right way’ to undertake MLE – depends on balancing multiple objectives.
    • Three main paradigms: experimental, theories of change, realistic
  • 5. Experimental
    • Randomised controlled trials provide the only scientific approach to the evaluation of an intervention.
    • Sometimes not possible to undertake RCTs, in which case we should approximate the RCT benchmark as closely as possible.
    • This requires the careful construction of a counterfactual, involving at least (1) the identification of intervention and control groups and (2) baseline and evaluation studies (a worked sketch follows this slide).
    • Recent debates consider value of:
      • ‘explanatory’ RCTs – designed to test specific hypotheses in a highly controlled context
      • ‘practical’ RCTs – designed to identify interventions that might produce beneficial outcomes in practice
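A minimal worked sketch of the counterfactual logic above, assuming only that baseline and evaluation (endline) outcome means are available for an intervention group and a control group; the function name and the figures are invented for illustration and are not taken from the presentation.

```python
# Hypothetical difference-in-differences sketch: the change observed in the
# control group stands in for what would have happened to the intervention
# group without the intervention.

def diff_in_diff(treat_baseline: float, treat_endline: float,
                 control_baseline: float, control_endline: float) -> float:
    """Estimated effect = change in intervention group minus change in control group."""
    change_treated = treat_endline - treat_baseline
    change_control = control_endline - control_baseline
    return change_treated - change_control

# Illustrative figures only (e.g. a coverage rate measured at baseline and endline).
effect = diff_in_diff(treat_baseline=0.40, treat_endline=0.55,
                      control_baseline=0.42, control_endline=0.47)
print(f"Estimated intervention effect: {effect:.2f}")  # 0.15 - 0.05 = 0.10
```

Without randomisation the control group only approximates the counterfactual, which is why its careful construction matters.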
  • 6. Theories of Change
    • Randomised controlled trials would be the best option in an ideal world but it will usually be impossible to employ them in practice.
    • With or without RCTs, it is essential that we focus not simply on whether an intervention succeeded or failed but why.
    • By devoting sufficient resources to developing a shared understanding of how an intervention is intended to work, we can design monitoring systems that allow us to evaluate how, and to what extent, observed outcomes can be plausibly attributed to the intervention – implementation theory (a hypothetical sketch follows this slide).
    • The use of an RCT or well-constructed counterfactual may be very helpful in this task.
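Loosely illustrating the idea of implementation theory above, the sketch below treats the intervention's intended causal chain as a list of links, each paired with a monitoring indicator, and reports how far along the chain the monitored data support the intended story; every step, indicator and number is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Link:
    step: str          # one link in the intended causal chain
    indicator: str     # what the monitoring system measures for this link
    observed: float    # value reported by routine monitoring
    expected: float    # level the implementation theory assumes

# Invented example chain for a hypothetical outreach programme.
theory_of_change = [
    Link("Health workers trained", "share completing training", 0.85, 0.80),
    Link("Households visited regularly", "mean visits per household per month", 0.9, 2.0),
    Link("Care-seeking increases", "change in facility utilisation rate", 0.10, 0.05),
]

# Observed outcomes are only plausibly attributable to the intervention up to
# the first link that the monitoring data fail to support.
for link in theory_of_change:
    supported = link.observed >= link.expected
    print(f"{link.step}: {'supported' if supported else 'not supported'} "
          f"({link.indicator} = {link.observed})")
    if not supported:
        print("Causal chain breaks here; later outcomes need another explanation.")
        break
```

The point is not the thresholds themselves but that monitoring is designed around the intended causal pathway rather than around the final outcome alone.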
  • 7. Realistic
    • The interventions with which we are concerned are multi-faceted and highly malleable, with multiple components that will be modified by local contexts (path dependency, emergent behaviour)
    • Those who participate in the implementation process, including intended beneficiaries and intervention managers, will have a diverse range of characteristics, perceptions and attitudes that shape their responses to the various intervention components.
    • Placebo effects – positive or negative responses to the fact of the intervention – will typically be large and uncontrollable.
    • The external environment within which the intervention is made will also inevitably give rise to unforeseen effects that vary over the intervention period.
  • 8. Realistic
    • Given this reality, it is essentially irrational to seek evidence that given types of intervention ‘work’.
    • The use of RCTs or ‘quasi-experimental’ designs is a waste of time and resources in terms of systematic learning.
    • What can usefully be achieved is to seek out the most interesting specific elements of the intervention and explore how they have performed in relation to specific groups of individuals (a small sketch follows this slide).
    • This will allow the construction of programme theories that genuinely advance our knowledge and can be used in the modification of the current intervention or design of the next.
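As a small, invented illustration of exploring specific intervention elements against specific groups, the sketch below summarises outcomes by element and participant group rather than asking whether the intervention as a whole ‘worked’; all records and labels are hypothetical.

```python
from collections import defaultdict
from statistics import mean

# (intervention element, participant group, outcome score) - invented records.
records = [
    ("voucher scheme", "rural women", 0.70),
    ("voucher scheme", "urban women", 0.30),
    ("outreach visits", "rural women", 0.60),
    ("outreach visits", "urban women", 0.65),
]

# Group outcomes by element-group pair: 'what works, for whom'.
by_pair = defaultdict(list)
for element, group, outcome in records:
    by_pair[(element, group)].append(outcome)

for (element, group), outcomes in by_pair.items():
    print(f"{element} for {group}: mean outcome {mean(outcomes):.2f}")
```

Patterns in such element-by-group comparisons are the raw material for the programme theories mentioned in the previous point.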
  • 9. Some questions
    • Are the expectations of MLE too high (given the typical level of resources)?
    • How should limited resources be allocated to different objectives: e.g. attribution versus learning?
    • How should objectives link to methodology (do multiple objectives imply multiple methodologies)?
    • Evaluator as expert or facilitator?
    • How do we balance the ‘objectivity’ of external evaluators against the in-depth knowledge of internal evaluators?