Evaluation for Development: matching evaluation to the right user, the right results, and the right approach

This panel starts from the premise that development evaluation can do more to contribute to development goals. It explores matching evaluation to appropriate users, appropriate articulation of results, and appropriate methods to what is being evaluated, since evaluation is itself an intervention to support better policies and programs.
While donors typically control evaluation agendas, grantees may be better placed to commission and use evaluations. We will present experiences of handing over control of evaluation to grantees, with practical and political issues that arise.
In development as elsewhere, agencies are frustrated when evaluation does not accurately capture the results they aim to achieve. Simple metrics and methods are often inadequate in complex systems. We will describe the challenges of articulating results appropriately so that evaluation does not miss, let alone undermine, results. We will also share experiences of using a complex systems approach to assessing outcomes, matching the values and purpose of the evaluand.

Slide notes:
  • Sanjeev Sridharan (keynote, call to action) is the Director of the Evaluation Centre for Complex Health Interventions at the Li Ka Shing Knowledge Institute at Toronto's St. Michael's Hospital – Global Health Research Partnership Program.
    This panel addresses the strands and leading questions of the 2013 conference. Key question (Shelia Von Sychowski, Conference Chair): how can evaluation shape positive change? This morning, Boris' inspirational call to action to swallow the orange pill – push evaluation to help accelerate the impacts of projects and capture the potential for transformative change.
    Strands: this panel critically assesses the roles of those who are typically evaluated (grantees) and those who typically commission evaluations, suggests that those roles should be challenged, and explores the implications of doing so.
  • Evaluation is itself an intervention. In IDRC's context, evaluation is an intervention to support and further the mission of IDRC, which is to support research that influences improvements in the health and well-being of people in developing countries.
    From our experience, we have identified three types of mismatches that can lead to evaluation missing its potential as a really positive intervention: the utility of the evaluation not being matched to the right user; the measures used in an evaluation not adequately matching desired results; and evaluation approaches not matching the nature and orientation of the programming under review.
    By examining our practice, and trying some new things in evaluation, we are trying to find ways in which evaluation can do a better job of supporting development. In this panel, you will hear a few undertones: how we think evaluation can contribute to the results we are trying to achieve; why we like evaluation; that this is an ongoing, aspirational process – we haven't figured it all out, and we are very open to receiving constructive critique and questions from you; and that we expect you, in your practice, may be dealing with issues similar to those we describe – we'd love to hear about that in the discussion that follows.
    We should begin with a very brief introduction of the International Development Research Centre…
  • Organizational context: IDRC is a Canadian crown corporation with headquarters in Ottawa. It supports research in developing countries to promote growth and development; for example, it explores the positive and negative impacts of widespread access to mobile telephones and the Internet, and funds research that helps to redress health inequities and improve health services, systems, and policies.
    What evaluation looks like at IDRC: it is decentralised – evaluation is a shared responsibility. Some evaluation happens in a very routine manner – for example, all of our programs are evaluated in a systematic way on regular cycles, serving a primarily accountability purpose – while at the project level the decision to evaluate is flexible and based on utility and strategic considerations. There is also a mandate to support research on evaluation directly related to programming needs at the Centre.
  • "Evaluation Field Building in South Asia: Reflections, Anecdotes, and Questions," American Journal of Evaluation: "The interest of funding agencies in evaluation has been too narrow for too long, generally emphasizing evaluation of development to the exclusion of evaluation for development." She (Katherine Hay) goes on to say that this has been coupled with an even more limiting tendency of donors to focus only on evaluation of "their" projects, with limited interest in building capacity in evaluation and handing over control for evaluation.
    Evaluation is very prominent in many of the current debates on development effectiveness. However, the critical question – evaluation for whom and by whom? – is only on the periphery of this debate. There is significant momentum around this with the EvalPartners initiative, but donor practice still has a way to go. Collective impact.
  • Approaches we have been experimenting with – a real spectrum in practice (intentionality, willingness to really test boundaries): handing over or sharing the evaluation agenda and building evaluation capacity.
  • 1. Grantee-managed (the weakest and most problematic approach, but there is potential): a high risk that these evaluations generally reflect a donor agenda. Typically a budget line is created during the development of the project, the evaluation is conducted because it is part of the plan, and the users are both IDRC and the research team – grantee use stays within the boundaries of donor use (single-project focus). These tend to have quality problems. We do routine quality assurance of all evaluations of IDRC-supported work; last year we reviewed the quality data from the previous 5 years and found that, in general, grantee-commissioned evaluations tended to be of lower quality (often assessing single projects, conducted at the end of the project cycle) – a utility weakness: users are not identified and user participation is weak.
    2. Collaboratively commissioned – increased focus on use by the grantee: for the evaluation of a major program expected-outcome area, we decided to do a case study of a "flagship" project. The research partner/grantee also expressed interest in having an externally validated view of its work and in learning as an organization.
    The next 2 examples get more serious about handing over that control…
  • You might be familiar with this one – Ricardo Ramirez and Dal Brodhead presented Developing Evaluation Capacity in ICT4D (DECI): an action-research project with an evaluation capacity development objective, finding ways to make UFE relevant to a set of very different research teams. It offered ICT4D researchers the option of learning UFE by applying it to their research projects – helping them develop their own evaluations using UFE (being the primary users of the evaluation instead of implementing evaluations imposed by a funding organization) – RESOURCES.
    Each evaluation was used by the managers and researchers in each project – step 11 of UFE calls for coaching in the use of evaluation findings, and the primary intended users also took ownership and had a stake in the findings.
  • NEHSI is a large (19 M), 6-year collaborative project between the Government of Nigeria, IDRC and CIDA: getting the health information system component of the health system to work proactively and positively, and improving planning, access and utilization of primary health services delivery – in turn leading to improved health outcomes in 2 states. At its core: building the habit of evidence-based planning.
    There were early discussions on what this evaluation could look like. The key evaluation question posed – what is the value added of the NEHSI approach for strengthening health systems? The best approach to answering this question is not evaluating the project per se, but understanding from the project what it takes to strengthen health systems in a sustainable way. We also concluded that the best perspective for asking this question is not that of the project, but that of decision-makers and implementers in the health system (local ownership is critical, enabling scale-up). This set us on the path of pursuing a country-led evaluation…
    We had space to be creative – an IA of one component was built in from the beginning, as a CIDA-managed evaluation. The struggle was to identify who could commission this and who could do it – watching, waiting and keeping the idea alive, being "mindfully opportunistic," so as to launch it once the conditions were right.
    Primary intended users – the Project Advisory Committee (PAC), a diverse user group with representatives from the state level, the federal Ministry of Health, research collaborators, civil society, and both funders. This means that the evaluation must be designed and carried out around the needs, values and intended uses of the PAC. As well, the PAC is responsible for defining the evaluation terms of reference, engaging in the process of the evaluation, and using the evaluation process and findings to inform their decisions and actions – TORs were negotiated with them, and they also agreed to be ambassadors.
    It was absolutely critical to have a Nigerian evaluation team – strengthening in-country evidence-based decision making; evaluation use is part of that, and there is a supply side and a demand side to that equation.
    We proposed using Outcome Harvesting (OH) – pioneered by Ricardo Wilson-Grau – which collects evidence of what has been achieved and works backward to determine whether and how the project contributed to the change.
  • These efforts required us to be intentional, proactive and facilitative. The process was negotiated – it is important to recognize and address power dynamics: funder–grantee, internal organizational, and political.
    Finding the appropriate users, helping them take on the role (TORs), and facilitating that role – preliminary TORs (options and questions), building ownership. Regional evaluation capacity. A learning, experimental agenda – but not without grounded purpose, and at the same time concerned with quality and capacity. Accountability mechanisms were built in – we had the space to be creative. UFE and OH created an enabling structure. Support for users and evaluators throughout the process. Timelines are important – opportunistic and flexible.
    Tensions = supporting and advancing the process while releasing control of the content. In DECI, given the choice, none of the grantees chose to focus on work beyond the work that was funded by IDRC. In NEHSI, the idea of a comparative assessment (multiple donors). Where to focus the evaluation lens? DECI had great success – DECI II.
  • Results are articulated differently. The relative emphasis depends on the state of the field and on what programs see is needed. Beyond these descriptions of what field-building results are, the interventions vary, and how far the program thinks it can get in a 5-year period varies enormously. This, though, would be the beginning of a conversation across programs to highlight similarities, sharpen differences, and perhaps ultimately arrive at an organizational-level framework for evaluating field building.
Transcript:

    1. Evaluation for Development: matching evaluation to the right user, the right results, and the right approach
       Sanjeev Sridharan, The Evaluation Centre for Complex Health Interventions
       Tricia Wind & Amy Etherington, International Development Research Centre
       CES Conference 2013
    2. Key messages:
       • Evaluation is an intervention
       • Mismatches:
         • utility not matched to a key user
         • measures not matched to desired results
         • approaches not matched to the nature and orientation of the programming under review
       • Evaluation can do more!
    3. Organizational context: the International Development Research Centre supports research in developing countries to promote growth and development
       IDRC's approach to evaluation:
       • Shared responsibility
       • Routine + strategic
       • Accountability + learning
       • Research on evaluation
    4. "The interest of funding agencies in evaluation has been too narrow for too long, generally emphasizing evaluation of development to the exclusion of evaluation for development" – Katherine Hay, 2010
       • Evaluation for whom and by whom?
    5. Grantee-led evaluation – 4 approaches:
       • Grantee-managed
       • Collaboratively commissioned
       • "Learning by doing" capacity building
       • Facilitated "handover"
    6. 1. Grantee-managed evaluation
       • Typically donor-driven
       • Quality concerns
       2. Collaboratively commissioned
       • Increased focus on use by grantee
       Example: users and intended uses of a co-commissioned evaluation
       IDRC program team – integrate lessons into programming and activities; share learning with other projects; and feed into the upcoming external evaluation
       Grantee organization – better understand the conditions for success as well as obstacles in order to improve decision-making and programming; accountability to the Board
    7. 3. "Learning by doing" capacity building
       Developing Evaluation Capacity in ICT4D (DECI)
       • Action-research project – apply Utilization Focused Evaluation to research projects
       • Experiment – ensure a group of researchers had the human, financial, and technical resources required to be primary evaluation users
       • Increased evaluation capacities of grantees, internal evaluators, regional evaluators
       • High-quality evaluations were conducted and used
    8. 4. Facilitated "handover"
       Nigeria Evidence-based Health System Initiative (NEHSI) – country-led evaluation
       • "What is the value added of the NEHSI approach for strengthening health systems?"
       • Perspective of decision-makers and implementers in the health system
       • Primary users – NEHSI Project Advisory Committee
       • Nigerian evaluation team
       • Outcome Harvesting approach with mentors
    9. How? What made this work?
       • Intentional, proactive, facilitated
       • Negotiated process
       • Focus on users
       • Regional capacity of evaluators
       • Learning, experimental agenda
       • Accountability mechanism in place
       • Use of structured frameworks & approaches
       • Use of mentors, technical expertise & guidance
       • Realistic, flexible timelines
       • Face tensions
    10. Evaluation for Development: Matching to the Right Results and Approach
    11. Metrics to evaluate research quality: peer-reviewed publications, journal ratings, citation indices
        … few journals for Southern research
        … monodisciplinary journals tend to have higher impact factors
        … audiences for development research include policy makers, practitioners
    12. Research Excellence – inputs from external practice (literature, R4D funders, R4D grantees) and internal practice (IDRC); a centrally-organized process to define a results framework
    13. Defining research excellence: scientific merit, integrity, relevance; use, influence, impact; innovative; stakeholder engagement
    14. Field building: develop results from innovation in programs (programs 1–5)
        • established research approach, methods
        • bodies of knowledge
        • capacity of researchers, networks
        • proof of influence
        • ongoing relationships with users
        • external validation
        • more, better coordinated funding
        • leadership development
        • internal comms, quality control
        • developing careers
    15. How can we better match evaluation approaches to the nature and goals of the research programming?
        • Complexity thinking
        • Equity focused
        • Feminist
        • Systems thinking
