Performance Monitoring and Evaluation

Rahul Bhargava
27 May 2012

Contents

Context
Components for a functional M&E system
Methodology

Context

Most progressive governments have institutionalized results-based management, leading to performance enhancement and the effective delivery of progress and change. The objective of results-based management is to "provide a coherent framework for strategic planning and management based on learning and accountability in a decentralised environment."[1] Introducing a results-based approach aims to improve management effectiveness and accountability by "defining realistic expected results, monitoring progress toward the achievement of expected results, integrating lessons learned into management decisions and reporting on performance."[2]

[1] Note on Results Based Management, Operations Evaluation Department, World Bank, 1997.
[2] "Results-based Management in Canadian International Development Agency", CIDA, January 1999.

Results-based management at UNDP, for example, is based on:

• the definition of strategic goals, which provide a focus for action;
• the specification of expected results, which contribute to these goals and align programs, processes and resources behind them;
• on-going monitoring and assessment of performance, integrating lessons learnt into future planning;
• improved accountability, based on continuous feedback to improve performance.

Development programs and policies are designed to achieve outcomes, for example to raise incomes or improve agricultural productivity. Impact evaluations are part of developing evidence-based policy. The Millennium Development Goals, Results Framework Documents and performance-pay incentives make implementers focus on results that are tracked internationally and nationally. These results are used to increase accountability, to inform budgeting and to shape policy. Monitoring and evaluation is used to improve the quality, efficiency and effectiveness of interventions.
Monitoring and evaluation (M&E) is key to the effective implementation of results-based management. Within a results-oriented environment, the emphasis of M&E is on:

• active application of monitoring and evaluation information to the continuous improvement of strategies, programs and other activities;
• monitoring of substantive development results instead of just inputs and implementation processes;
• monitoring and evaluation of results as they emerge, instead of as an ex-post activity;
• conducting monitoring and evaluation as joint exercises with government departments.

Components for a functional M&E system

The World Bank identified twelve components of a working monitoring and evaluation system following international peer review. This approach was formally adopted by UNAIDS and partners for their M&E capacity-building efforts in 2007, to support the measurement and management of the HIV/AIDS epidemic. The twelve components of a functional M&E system are listed below.[3]

[3] Goergens, Marelize, and Jody Zall Kusek. Making Monitoring and Evaluation Systems Work: A Capacity Development Tool Kit. World Bank Publications, 2010.
[Figure: the twelve components of a functional M&E system, grouped as in the list that follows]

Components relating to "people, partnerships and planning":

1. Structure and organizational alignment for M&E systems
2. Human capacity for M&E systems
3. M&E partnerships
4. M&E plans
5. Costed M&E work plans
6. Advocacy, communication, and culture for M&E systems

Components relating to "collecting, capturing and verifying data":
7. Routine monitoring
8. Periodic surveys
9. Databases useful to M&E systems
10. Supportive supervision and data auditing
11. Evaluation and research

Final component, relating to "using data for decision-making":

12. Using information to improve results

As suggested by the authors, these components may be used as an organizing framework for planning an M&E system's staff, resources, support and funding requirements. They may also serve as a reference for assessing a national M&E system, akin to the RFD framework, so that individual components can be assessed separately, and as a framework for dividing responsibilities at the country level within which all partners can work together.

Methodology

Monitoring is a continuous process that is used to inform program implementation and day-to-day management. It usually tracks performance against expected results, facilitates comparisons across programs and allows trends to be reviewed over time. Inputs, activities, outputs and occasionally outcomes, such as progress toward national and international development goals, are tracked.

Evaluations, meanwhile, are periodic, objective assessments of completed projects, programs or policies. They set out to answer specific questions about design, implementation and results or outcomes. To justify an evaluation, a program should be:

Innovative: to test a novel approach;
Replicable: to decide whether to scale up in a different setting, geography or context;
Strategically relevant: to review flagship initiatives;
Untested: globally or in context;
Able to influence policy.

The cost-effectiveness of programs can be determined following impact evaluations. Specifically, questions regarding the cost-benefit balance of a given program, and comparisons of the cost-effectiveness of implementation alternatives, can be answered based on the evidence.
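To make such a comparison concrete, a cost-effectiveness ratio divides each implementation alternative's cost by the effect it produced, so that alternatives can be ranked on cost per unit of outcome. The figures below are purely hypothetical and are not drawn from any program discussed in this note:

\[
\text{CE}_A = \frac{\text{cost}_A}{\text{effect}_A} = \frac{\$500{,}000}{2{,}000\ \text{additional school-years}} = \$250\ \text{per additional school-year}
\]

\[
\text{CE}_B = \frac{\text{cost}_B}{\text{effect}_B} = \frac{\$300{,}000}{1{,}000\ \text{additional school-years}} = \$300\ \text{per additional school-year}
\]

On these assumed figures, alternative A delivers an additional year of schooling more cheaply even though its total budget is larger; the effect terms in the denominators are exactly what an impact evaluation supplies.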
Impact evaluations should be approached pragmatically; that is, the methods should fit the operational context, not vice versa. This is achieved at the outset of a program, by designing prospective impact evaluations into the project's implementation. Evaluation designs that fit the political and operational context are as important as the method itself. Where policy makers and civil society demand results and accountability from public programs, impact evaluations provide credible evidence on performance and on whether a program achieved its desired outcomes.[4]

[4] Gertler, Paul (2010). Impact Evaluation in Practice. Herndon, VA: World Bank Publications.

There are caveats, however. Often there is greater emphasis on controlling inputs, say, funds utilized or literature distributed, than on assessing whether a program has achieved its goal.

Attribution is the hallmark of impact evaluations. They assess the improvements in people's well-being that can be attributed to a specific project, program or policy. It follows that, executed correctly, impact evaluations should be carried out within a logical framework that sets out the causal pathways by which a program produces outputs and influences outcomes.[5]

[5] Ibid.

For example, the Government of Mexico recognized the need to monitor and evaluate the roll-out of its innovative conditional cash transfer program, "Progresa", in the 1990s. The program's objective was to provide short-term support and create incentives for investment in children's human capital, primarily conditional on regular school attendance and visits to health centres. Impact evaluation was built into the program's scale-up and replication. External evaluators found, in 2001, that the program targeted the poor well, improved school enrollment by an average of 0.7 additional years of schooling, reduced illness among children by 23 percent, and led to 19 percent fewer sick or disability days among adults. The program also improved child growth by roughly 1 centimeter per year among children between the ages of 12 and 36 months, reducing the probability of stunting. The evidence contributed to the decision by the new administration, following a presidential election, to expand the program by providing upper-middle school scholarships and health programs for adolescents. Other, less well-targeted social assistance programs, such as a large tortilla subsidy, were scaled back.
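The attribution logic described above can be made concrete with the potential-outcomes notation standard in the impact-evaluation literature; the formulation below is a sketch in that notation and does not appear in the original note. For an individual i, let Y_i(1) denote the outcome with the program and Y_i(0) the outcome without it. The average impact of the program is

\[
\text{ATE} = \mathbb{E}\bigl[\,Y_i(1) - Y_i(0)\,\bigr],
\]

which can never be observed directly, because each individual is seen in only one of the two states. When assignment to the program is effectively random, the comparison group's average outcome stands in for the missing counterfactual, and the impact can be estimated as a simple difference in means:

\[
\widehat{\text{ATE}} = \bar{Y}_{\text{treated}} - \bar{Y}_{\text{comparison}}.
\]

Results such as the 0.7 additional years of schooling reported for Progresa are estimates of effects of this kind, and the causal pathway from program to outcome set out in the logical framework is what such a comparison is meant to measure.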