Monitoring and Evaluation: General Principles and Practices

This seminar was designed for the senior management team at the ATA and policymakers in Ethiopia to introduce the merits of developing a Monitoring and Evaluation (M&E) system. The half-day seminar covered topics such as why, what, and how to monitor and evaluate programs and policies. It also covered how an M&E framework can be designed and implemented.

  1. Monitoring and Evaluation: General Principles and Practices
     Shahid Khandker, International Food Policy Research Institute (IFPRI)
  2. What Do We Expect from this Seminar?
     • Discuss the conceptual underpinnings of an M&E system and its application in practice
     • Examine how an M&E framework can be developed and implemented
     • Meet two objectives: (i) enhance understanding of designing and implementing M&E systems and (ii) demonstrate how such a system can be developed using ATA's cluster-based program
     • Five issues are covered: (1) differences between monitoring and evaluation; (2) the essence of monitoring; (3) the essence of evaluation; (4) differences between evaluation (E) and impact evaluation (IE), and various IE methods; and (5) the M&E and IE design of ATA's cluster-based agricultural transformation
     • Discussion will follow the presentation
  3. What is M&E?
     Monitoring and evaluation are tools that make it possible to review and measure the results of projects, programs, or policies.
  4. Why Concern for M&E?
     • To evaluate and adjust strategies and activities
     • To report on progress to interested parties, clients, and the general public
     • To identify and share with others best practices and lessons learned
     • To improve the programming of new interventions and strategies
  5. What is Monitoring?
     Monitoring provides regular information on how things are working.
     Definition: a continuing function that:
     a. Uses systematic data collection and analysis of specific indicators of progress
     b. Provides management with an indication of the extent of progress towards goals (achievement of deliverables, use of resources)
     c. Contributes to performance improvement and course correction
     d. Is conducted by the business unit
  6. What is Evaluation?
     Evaluation can only be done after a certain time and requires thorough investigation. It is conducted by independent evaluators.
     ◦ Definition: a systematic and objective measurement of the results achieved by a project, program, or policy in order to assess its relevance, efficiency of implementation, effectiveness, impact, and sustainability.
  7. Monitoring vs. Evaluation
     • Monitoring assesses progress in the implementation of ongoing programs
     • Evaluation provides a snapshot, against some benchmarks, at a point in time of programs that may or may not be continuing
     • Monitoring looks at progress relative to targets and assumes there is causality
     • Evaluation seeks to prove causality
  8. Monitoring and Evaluation: Rationale
     Monitoring
     ◦ Holds implementers accountable for delivery of inputs
     ◦ Provides a basis for corrective action
     ◦ Provides an assessment of continued relevance
     Evaluation
     ◦ Accountability: was money well spent?
     ◦ Did the program have any causal effect on its objectives?
     ◦ Learning: what could we do better next time?
  9. Monitoring and Evaluation, and the Chain of Results
     Resources are mobilized to undertake activities, the direct results of which must have effects and an impact on development. Monitoring covers the early links of this chain, evaluation the later ones. The chain is based on a series of logical relationships (if... then) called the logframe.
  10. Logframe of a Project or Program
     Resources are mobilised to undertake activities, the direct results of which must have effects and an impact on development (see the sketch below):
     ◦ Allocation: budget funds
     ◦ Inputs: building, training, delivery
     ◦ Outputs: amount of services provided, processes completed
     ◦ Outcomes: health, literacy
     ◦ Impact: consumption, life expectancy, poverty
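To make the chain concrete, it can be written down as ordered data. This is a minimal sketch that simply reuses the slide's own examples as labels; nothing here is part of the program's actual design:

```python
# The logframe chain as ordered data; entries restate the slide's examples.
logframe = [
    ("Allocation", ["budget funds"]),
    ("Inputs",     ["building", "training", "delivery"]),
    ("Outputs",    ["amount of services provided", "processes completed"]),
    ("Outcomes",   ["health", "literacy"]),
    ("Impact",     ["consumption", "life expectancy", "poverty"]),
]

# Each stage links to the next by an "if ... then" relationship.
for (stage, _), (next_stage, _) in zip(logframe, logframe[1:]):
    print(f"if {stage}, then {next_stage}")
```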
  11. Purpose of an M&E System: 4 Main Purposes
     ◦ Manage, monitor: people in charge
     ◦ Decide: managers, operational staff
     ◦ Know, understand, and learn lessons: researchers
     ◦ Be informed: public, beneficiaries, contributors
  12. Key M&E Products across the Program Cycle
     ◦ Before the program starts: to check the design
     ◦ During the program: to improve implementation
     ◦ At the end: for accountability purposes
     ◦ Afterwards: to assess impact
  13. Components of an M&E Strategy
     • An outcome-based monitoring system, properly implemented and evaluated
     • Complemented with systematic and strategic impact evaluation
     • Creating a feedback process
     • Building capacity for monitoring and evaluation
     • Promoting participation of policymakers for informed policy-making
  14. Essence of Outcome-Based Monitoring
  15. Why Monitoring?
     Monitoring is required for the following purposes:
     • Effective management
     • Policy transparency
     • Democratic accountability
     • Feasible target setting
  16. Outcome-Based Monitoring System
     • Setting goals and targets (including establishing the baseline): where do we want to go?
     • Identifying indicators that can be used to measure progress towards goals
     • Collecting data: what progress is being made?
     • Providing feedback for decision making: what needs to be changed along the way?
  17. What to Monitor?
     • Identify a few indicators, measure them well, and put the results to use for policymakers
     • Maintain a prioritized list of input, output, outcome, and impact indicators for monitoring
     • Develop a data collection system to institutionalize monitoring: budget and administrative data, facility and other survey data
  18. How to Monitor?
     Manage a monitoring system that integrates a variety of different types of information: MIS, surveys and censuses, and participatory exercises.
     Draw up a monitoring matrix (sketched below) that identifies:
     • Data sources for each indicator
     • Frequency of measurements
     • The organization responsible for collecting information
     Collect information in a sequence that maximizes the complementarities between different types of data.
     Outputs:
     • Annual progress report
     • Database for continuous monitoring
     • Desk studies
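A monitoring matrix is easiest to keep honest as structured data. This is a hypothetical sketch: the indicator names, sources, frequencies, and organizations are invented for illustration, not taken from the seminar:

```python
# Hypothetical monitoring matrix as structured data; every value below
# is an invented example, not an actual ATA indicator.
monitoring_matrix = [
    {"indicator": "hectares under improved seed",
     "source": "extension MIS",
     "frequency": "quarterly",
     "responsible": "regional bureau"},
    {"indicator": "smallholder maize yield (t/ha)",
     "source": "annual farm survey",
     "frequency": "annual",
     "responsible": "statistics agency"},
]

# Example use: list what each organization is responsible for collecting.
for row in monitoring_matrix:
    print(f"{row['responsible']}: {row['indicator']} "
          f"({row['source']}, {row['frequency']})")
```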
  19. Monitoring System: What Components?
     Actors:
     ◦ Data producers
     ◦ Analysts
     ◦ Users of the analysis, including decision makers and stakeholders
     Activities:
     ◦ Collecting and processing information
     ◦ Data analysis
     ◦ Dissemination and feedback
  20. Monitoring System: Where to Start?
     Taking stock:
     • Who are the actors?
     • What are their activities?
     • What is their capacity?
     • What are their roles and needs?
     Review the actions currently undertaken to build or reinforce capacity
     Review needs and information gaps
     Review the institutional framework
  21. Monitoring System: What Limitations?
     A series of issues to note:
     • Badly defined roles
     • Lack of coordination
     • Lack of reliability and relevance of information
     • Difficulty in accessing information
     • Long delays in the production of information
     • Lack of use of the data by policymakers
  22. Essence of Evaluation
  23. Is There a Legitimate Need for an Evaluation?
     Is there an interest in:
     ◦ Accountability?
     ◦ Decision-making?
     ◦ Learning?
     What information could an evaluation produce that we don't have now?
     Are there faster, better, more cost-effective ways of improving a program than conducting an evaluation?
  24. Plan an Evaluation
     Assess stakeholders' needs
     Prepare terms of reference stating:
     ◦ Issues
     ◦ Methodology/approach
     ◦ Schedule and cost
     Prepare a workplan to clarify:
     ◦ Questions/issues
     ◦ Evaluation methodology
  25. What to Evaluate?
     An evaluation can be time- and resource-intensive. Conduct a structured evaluation only in the following cases:
     ◦ The policy/program is of strategic importance
     ◦ The evaluation would fill a knowledge gap about what works and what does not
     ◦ The policy or program is innovative
  26. How to Evaluate?
  27. An Operational Evaluation
     Did the implementation of this program unfold as planned? How can you provide a valid answer to this question?
     1) Identify what was planned in the original documents
        ◦ Document review
        ◦ Interviews with various people in charge to ensure a good understanding of the situation
     2) Reconstruct what happened in reality
        ◦ Document review, review of archive files
        ◦ Interviews with people involved in the different modules and the different phases, and with different sensitivities
  28. An Operational Evaluation (continued)
     3) Compare the plan with what happened, identify gaps, and check that opinions collected on this issue lead to the same conclusions
        ◦ Preparation of a retrospective time chart
        ◦ Organization of the arguments
        ◦ Interviews with various stakeholders to check the validity of the conclusions reached
     4) Decide what constitutes a minor change and what constitutes a notable change; flag changes that have had visible consequences on costs and delivery
  29. Different Types of Evaluation Complement Each Other
     • Operational evaluation should be part of a normal (M&E) procedure within the implementing agency
     • The same template is very useful for a statistical impact evaluation: we need to know the context in which our data was generated and where policy efforts were directed
     • This is essential for the interpretation of results; it is difficult to evaluate the final impact of a policy without knowing how it was implemented
  30. Impact Evaluation Focuses on the Latter Stages of the Logframe
     How do you measure the impact (effects) of a program? This is one of the key methodological issues for evaluations. The rationale of a program is to alter an outcome or impact from what it would have been without the program. Impact evaluation concentrates on the outcome and impact links of the chain (allocation, inputs, outputs, outcomes, impact), covering both long- and short-term impacts.
  31. Impact Assessment Methods
     Solutions must be found for two problems:
     1. To what extent is it possible to identify the effect (revenues increase, the prevalence of a disease goes down, etc.)?
     2. To what extent can this effect be attributed to the program (and not to some other cause)?
     To find the best possible answers to these two questions, methods that are specific to impact evaluation should be used.
  32. The Counterfactual
     What would have happened to the beneficiaries if the program had not existed? The evaluator's key questions: How do you rewrite history? How do you obtain baseline data?
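In notation (added here; the slide states this only in words), the impact is the gap between the observed outcome and the counterfactual one, which is never observed directly:

$$\Delta = Y_1 - Y_0,$$

where \(Y_1\) is the beneficiaries' outcome with the program and \(Y_0\) the outcome the same beneficiaries would have had without it. Every method on the following slides is a different way of approximating \(Y_0\).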
  33. The Only Solution Beyond Doubt: An Experiment
     The ideal experiment compares two studied groups subject to the same other influences: an experimental group exposed to the project or policy, and a control group that is not. The difference between the results observed for the experimental group and those observed for the control group gives the net results attributable to the project or policy.
  34. The Ideal Experiment (with an Equivalent Control Group)
     In theory, a single observation is not enough, and extreme care must be taken when selecting the control group to ensure comparability. (Chart: income level over time; beneficiaries rise from 10 to 30 over the program while the equivalent control group rises from 10 to 17, so the effect or impact is 13.)
  35. The Ideal Experiment in Practice
     With an equivalent control group, it is in practice extremely difficult to assemble an exactly comparable group:
     ◦ Ethical problem (condemning a group to not being beneficiaries)
     ◦ Difficulties associated with finding an equivalent group outside the project
     ◦ Costs
     Therefore, this solution is hardly ever used.
  36. Evaluation Using a Simple Post-Implementation Observation
     With a single observation of the beneficiaries after the program (chart: a revenue level of 30), it is impossible to reach a conclusion regarding the impact. It is possible to say whether the objective has been reached, but the result cannot be attributed to the program.
  37. Evaluation without a Comparison Group, Using a Before/After Comparison
     Findings on the impact lack precision and soundness; a time series makes it possible to reach better conclusions. (Chart: beneficiaries' income over time, from 10 at the baseline study to 30 in a broad descriptive survey after the program, with a time series of intermediate observations at 14, 17, and 21; the effect or impact remains uncertain.)
  38. Comparison with a Non-Equivalent Control Group
     Data on the before and after situations is needed for both groups. (Chart: beneficiaries' income rises from 10 to 30 over the program, the non-equivalent control group's from 14 to 21; projecting the control group's trajectory onto the beneficiaries if there were no policy gives an effect or impact of 13.)
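The chart's arithmetic is a difference-in-differences (notation added here): the before/after change for the beneficiaries minus the before/after change for the comparison group.

$$\hat{\Delta}_{\mathrm{DD}} = \left(Y^{T}_{\text{after}} - Y^{T}_{\text{before}}\right) - \left(Y^{C}_{\text{after}} - Y^{C}_{\text{before}}\right) = (30 - 10) - (21 - 14) = 13.$$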
  39. Four Broad Categories of Evaluation Methods, Ordered by Quality
     Evaluation without a control group:
     ◦ Post-implementation observation (quality: -)
     ◦ Observation before and after implementation (quality: +)
     Evaluation by comparison with a control group:
     ◦ Non-equivalent control group (quality: ++)
     ◦ Equivalent control group, i.e., true experimentation (quality: +++)
  40. Example: Ethiopian Cluster-Based Strategy for Agricultural Transformation
     Structure of the Ethiopian economy in 2012/13:
     ◦ Services: 45%
     ◦ Industry: 11%
     ◦ Agriculture: 44%
     Target structure of a middle-income economy:
     ◦ Services: 40%
     ◦ Industry: 40%
     ◦ Agriculture: 20%
  41. Objectives of the Cluster-Based Strategy
     • Agro-processing and value addition as an entry point to catalyze rural transformation
     • An active agro-industrial policy to rapidly increase agro-processing and value addition
     • A geographic cluster-based approach to implement multiple interventions
     • Investment of about $75 million per cluster for 16 clusters
     • Promote private investment from 6.9% to a higher level and reduce public investment from 75% to a lower level
  42. Cluster-Approach Investment Program
     • Rural financial services: value chain financing and SME finance; increased savings mobilization; growth of mobile and branchless banking
     • Rural infrastructure and urban-rural linkages: infrastructure investment; market-driven migration and urbanization
     • Social development and environmental sustainability: upgraded skills through training and technology transfer; increased youth employment; improved health and nutrition via higher income
  43. Multiple Interventions
     More specifically, the interventions consisted of the following:
     (1) Strengthen the enabling environment;
     (2) Develop contract farming;
     (3) Improve food safety;
     (4) Improve infrastructure;
     (5) Facilitate access to finance;
     (6) Expand access to a broad set of inputs;
     (7) Enhance capacity building; and
     (8) Strengthen youth employment policies and strategies.
  44. Private Investment and Regional Policy
     • Increase commercial crop production
     • Large-scale aggregation and storage
     • Advanced processing, marketing, and export
     16 clusters of 10-12 woredas each, identified from 5 regions as pilot projects
     Commodity-based cluster approach:
     ◦ Sesame (western Tigray)
     ◦ Maize (southwest Amhara)
     ◦ Wheat and barley (eastern Oromia)
     ◦ Coffee (eastern SNNP)
     ◦ Livestock (northern)
  45. Financing Requirements
     • $550m-$650m investment from government and development partners
     • A matching amount from the private sector
     • About $75 million of investment per cluster (16 clusters), consisting of:
     ◦ $30 million for basic infrastructure (roads, electricity, water, irrigation, and sewerage)
     ◦ $40 million for agro-processing and value-addition infrastructure (general and cold storage, processing and packaging, etc.)
     ◦ $2 million for capacity building (human and institutional)
     ◦ $3 million for access to finance (input credit and value chain financing)
  46. How to Design an M&E Framework
     Develop the logical framework (logframe):
     ◦ Input: total cost by sector and crop for the 16 clusters is $1,200 million, of which development partners are expected to contribute 35% (out of the 50% government share), plus 50% from the private sector
     ◦ Output: physical infrastructure and facilities created
     ◦ Outcome: total revenue and jobs generated
     ◦ Impact: extent of agricultural transformation; extent of rural industrialization and its effect on income and living standards in both rural and urban areas
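As a consistency check (arithmetic added here), the input figure follows directly from the per-cluster budget on the financing slide:

$$16 \text{ clusters} \times \$75\text{M} = \$1{,}200\text{M}, \qquad 30 + 40 + 2 + 3 = 75 \;(\$\text{M per cluster}).$$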
  47. M&E Strategy
     1. Identify the indicators for monitoring and evaluation by crop and cluster
     2. Develop a timeline for program development and implementation
     3. Identify instruments for data generation and processing before and after the intervention, with a suitable time lag
     4. Assess possible linkages between investment and indicators of development
     5. Develop a dissemination strategy linked to policymaking
     6. Program officials take the lead on the M&E strategy
     7. M&E feedback must be well-coordinated and timely
     8. Policymaking must be informed by M&E data
  48. Impact Evaluation (IE) Strategy
     1. Identify intervention instruments for rigorous IE
     2. Determine which interventions are critical for follow-up/scale-up
     3. Identify the logical path for impact assessment
     4. Use appropriate IE methods (RCT, PSM, DD, RD, etc.) for each intervention selected for IE
     5. Have a third party help design and implement the IE strategy, with the support of the program officials in charge
     6. It is critical to construct the counterfactual (what would have happened had the intervention/program not existed?)
  49. One Possible IE Method: the DD Approach
     • Randomly select 60 woredas from treated clusters and 40 from non-treated clusters
     • Randomly select 20 communities from each woreda and 20 households from each community (a total of 4,000 households sampled)
     • Carry out both a baseline survey (before treatment) and a follow-up (after 3 years) to create a panel data set
     • Apply a DD framework to estimate the benefits (see the sketch below)
     • Carry out another follow-up after 5 years of program intervention to estimate long-term impacts
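A minimal sketch of the DD estimate on such a panel, in Python with pandas and statsmodels. The data are simulated stand-ins, and the variable names ('income', 'treated', 'post') are assumptions rather than the program's actual indicators; the simulation plants a true effect of 13 so the regression has a known answer to recover:

```python
# Difference-in-differences on a simulated two-period household panel.
# All numbers are fabricated for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 4000  # households observed at baseline and follow-up

treated = rng.integers(0, 2, n)  # 1 = household lives in a treated cluster
base = pd.DataFrame({"hh": np.arange(n), "treated": treated, "post": 0})
follow = pd.DataFrame({"hh": np.arange(n), "treated": treated, "post": 1})
panel = pd.concat([base, follow], ignore_index=True)

# Outcome with a baseline gap, a common time trend, and a true
# program effect of +13 that the DD term should recover.
panel["income"] = (
    10
    + 4 * panel["treated"]                   # pre-existing difference
    + 7 * panel["post"]                      # secular trend affecting everyone
    + 13 * panel["treated"] * panel["post"]  # the program's true effect
    + rng.normal(0, 2, len(panel))           # household-level noise
)

# DD regression: the coefficient on treated:post is the impact estimate.
model = smf.ols("income ~ treated * post", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["hh"]}
)
print(model.params["treated:post"])  # should print roughly 13
```

In a real evaluation of this design, standard errors should be clustered at the level where treatment is assigned (the cluster or woreda), not the household; household clustering is used here only to keep the sketch self-contained.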
  50. Concluding Remarks
     • Monitoring must be focused, well-defined, and systematic
     • Indicators for monitoring inputs, outputs, outcomes, and impacts should be few in number
     • Monitoring is easier and less expensive than carrying out an impact evaluation
     • Project/program-level impact evaluation is easier than sector- or economy-wide impact evaluation
     • IE helps determine whether and how programs matter, and it requires planning and resource allocation to be used well
