Capacity Development for Monitoring and Evaluation

Monitoring is the continuous collection of data and information on specified indicators to assess the implementation of a development intervention in relation to activity schedules and expenditure of allocated funds, and progress and achievements in relation to its intended outcome.
Evaluation is the periodic assessment of the design, implementation, outcome, and impact of a development intervention. It should assess the relevance and achievement of the intended outcome, implementation performance in terms of effectiveness and efficiency, and the nature, distribution, and sustainability of impact.

Notes

  • The basic element of monitoring and evaluation is that both activities hinge on the availability of a results chain. This simplified graphic shows that desired results are not activities but that they depend on activities being carried out. Therefore, good results chains are causally linked. There should be clear “if…then” connections. Good results boxes should also demonstrate change. Each box should describe how one hopes the relevant factor will change. They should be reasonably complete. And they should be simple.
  • We monitor to track inputs and outputs and compare them to plan, and to (i) identify and address problems; (ii) ensure effective use of resources; (iii) enhance quality and learning to improve activities and services; (iv) strengthen accountability; (v) provide a management tool; (vi) influence future decisions; and (vii) provide data for evaluation of a development intervention.
  • We evaluate to determine effectiveness, show impact, strengthen financial responsibility and accountability, promote a learning culture focused on service improvement, and encourage replication of successful development interventions.
  • Leveraging monitoring and evaluation, we are now in a position to ensure systematic reporting, communicate results and accountability, measure efficiency and effectiveness, provide information for improved decision making, optimize allocation of resources, and promote continuous learning and improvement.
  • We must recognize, however, that the degree of control over deliverables decreases as we move up the results chain and that, in parallel, the challenge of monitoring and evaluation increases as we do so.
  • Notwithstanding, monitoring and evaluation can make a powerful contribution. To recap, there must be a clear statement of a measurable objective; structured indicators for inputs, activities, outputs, outcome, and impact; baselines and a means to compare against targets; and mechanisms for reporting and use of results in decision making (a minimal illustrative sketch follows at the end of these notes). Where applicable, one should build into this a framework and methodology capable of establishing causation. In short, effective monitoring and evaluation systems must be developed if we are to manage for development results and effectiveness.
  • Beginning in 1990, ADB’s approach to evaluation capacity development has reflected corporate shifts from benefit monitoring and evaluation and postevaluation to a framework embracing the project cycle. In the first phase, seven small-scale technical assistance projects built postevaluation capability (i.e., in Bangladesh, the People’s Republic of China, Nepal, Papua New Guinea, Philippines, Sri Lanka, and Thailand). In the second phase, five technical assistance projects established project performance management systems (i.e., in the People’s Republic of China, Nepal, Philippines, Sri Lanka, and Thailand). In the third phase, two technical assistance projects built monitoring and evaluation systems (i.e., in the People’s Republic of China and Philippines).
  • Other lessons were incorporated in the design of the regional technical assistance I am about to outline. They were: Monitoring and evaluation systems are a means to an end—benefits are obtained when results are used in decision making. It is advisable to locate responsibility for monitoring and evaluation near the capable head of an organization. Monitoring and evaluation systems should not become too complex or resource-intensive. Monitoring and evaluation systems encompass data collection in the field and aggregation and analysis by end users. Evaluation capacity development that concentrates on the oversight agency carries the risk that other entities may lack incentives to provide data and information. Case studies help to develop staff competency and confidence.
  • The Third International Roundtable on Managing for Development Results, held in February 2007 in Hanoi, was a milestone for aid effectiveness. It focused on building the capacity of countries to manage for results and develop country-level and regional action plans. The priority placed on evaluation capacity development is reflected in demand for knowledge products and services, e.g., those offered by IPDET, and in the growth of evaluation associations. In September 2007, ADB approved the first of a series of multiyear, integrating instruments to develop regional capacity for monitoring and evaluation. The technical assistance is financed by the Government of the People’s Republic of China.
  • The countries involved in the first technical assistance are Cambodia, the Lao People's Democratic Republic, and Viet Nam. One set of activities toward the first output will raise proficiency with regional and national training-of-trainers in tools, methods, and approaches for monitoring and evaluation. SHIPDET is the regional training input to this. Another set will relate to consulting inputs in the field of strategy and policy formulation, including international training at IPDET. The second output will be accomplished by extensive support for the formulation of country strategies for monitoring and evaluation, conducted in-country and in sequential fashion, which will feed the justification, research, and analysis of options for a strategic direction for evaluation capacity development by ADB and for its delivery design.
  • Activities toward the third output will enhance selected knowledge sharing and learning platforms, extend to evaluation agency staff advice on new and existing knowledge networks on monitoring and evaluation, and promote and conclude partnership arrangements with interested evaluation associations.
  • Assumptions and risks identify conditions, external to a development intervention, that are needed to ensure that one level of performance indeed causes the next level of performance to happen. In successive slides, here are those identified for the technical assistance at the output, outcome, and impact levels.
  • Monitoring assumptions and risks is critical to project success. The environment is continually influencing the cause-effect hypotheses on which the development intervention is built. Implementers of a development intervention must ensure that such hypotheses continue to remain valid. Monitoring should be built into the intervention’s performance monitoring and management system. The performance indicators should be regularly monitored, and the assumptions on which they are built should be frequently checked and verified.
  • Given that planners must make assumptions at the design stage, the implementation arrangements for a development intervention must allow for incorrect assumptions. This can be done by highlighting key assumptions to monitor during the course of the project, suggesting ways of ensuring that an assumption turns out to be correct, indicating how the validity of the assumption can be monitored, and suggesting what action to take if an assumption is proving invalid.
  • This graphic illustrates the implementation and partnership arrangements for the technical assistance. The Center for Development and Research in Evaluation, Malaysia, represented here today, will coordinate, supervise, and monitor overall activities. It specializes in capacity development for public sector monitoring and evaluation and integrated performance management and has extensive international and regional experience, expertise, capacity, and commitment.
  • The assistance has a focus on countries of the Greater Mekong Subregion, which are at the forefront of ADB's work on regional cooperation and integration. With us today are representatives of key policy entities selected by the countries themselves. We also have a representative from the Central Asian republics, who we hope can provide an early link to a second phase to this technical assistance. Participants were selected on the basis of their being able to act as focal or resource persons for 3–5 years. We hope that this will indeed be the case.
  • The technical assistance will be implemented over two years from October 2007. Here is its indicative activities schedule.
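
To make the preceding points about structured indicators, baselines, targets, and the monitoring of assumptions concrete, here is a minimal, hypothetical sketch in Python. The ResultsLevel, Indicator, and Assumption classes, their fields, and the sample values are illustrative assumptions for exposition only; they do not describe any ADB or partner system.

```python
# Hypothetical sketch only: structured indicators with baselines and targets along
# a results chain, plus assumptions whose validity is checked during monitoring.
# All class names, fields, and data values are illustrative.
from dataclasses import dataclass
from enum import Enum


class ResultsLevel(Enum):
    INPUTS = 1
    ACTIVITIES = 2
    OUTPUTS = 3
    OUTCOME = 4
    IMPACT = 5


@dataclass
class Indicator:
    name: str
    level: ResultsLevel
    baseline: float  # value at the start of the intervention
    target: float    # value the intervention aims to reach
    latest: float    # most recent measurement from monitoring

    def progress(self) -> float:
        """Share of the baseline-to-target distance covered so far."""
        span = self.target - self.baseline
        return 1.0 if span == 0 else (self.latest - self.baseline) / span


@dataclass
class Assumption:
    statement: str
    still_valid: bool  # updated whenever monitoring checks the assumption


indicators = [
    Indicator("Staff trained in M&E", ResultsLevel.OUTPUTS, baseline=0, target=40, latest=25),
    Indicator("Country M&E strategies drafted", ResultsLevel.OUTCOME, baseline=0, target=3, latest=1),
]
assumptions = [
    Assumption("Evaluation agency staff are available to be trained", still_valid=True),
]

# Reporting: compare monitoring data against targets and flag invalid assumptions.
for ind in indicators:
    status = "on track" if ind.progress() >= 0.5 else "needs attention"
    print(f"{ind.level.name}: {ind.name} at {ind.progress():.0%} of target ({status})")
for a in assumptions:
    if not a.still_valid:
        print(f"Assumption no longer holds, corrective action needed: {a.statement}")
```

In practice, such records would also capture data sources, collection frequency, and responsibilities, so that reporting can feed decision making as described in the notes above.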

Transcript

  • 1. Capacity Development for Monitoring and Evaluation Olivier Serrat Asian Development Bank
  • 2.
    • The Results Chain
    Monitoring and Evaluation: Inputs → Activities → Outputs → Outcome → Impact
  • 3.
    • What is Monitoring?
    • Monitoring is the continuous collection of data and information on specified indicators to assess the implementation of a development intervention in relation to activity schedules and expenditure of allocated funds, and progress and achievements in relation to its intended outcome.
    • Monitoring
      • involves day-to-day follow-up of activities during implementation to measure progress and identify deviations
      • requires routine follow-up to ensure activities are proceeding as planned and are on schedule
      • needs continuous assessment of activities and results
      • answers the question, “what are we doing?”
    Monitoring and Evaluation
  • 4.
    • What is Evaluation?
    • Evaluation is the periodic assessment of the design, implementation, outcome, and impact of a development intervention. It should assess the relevance and achievement of the intended outcome, implementation performance in terms of effectiveness and efficiency, and the nature, distribution, and sustainability of impact.
    • Evaluation
      • is a systematic way of learning from experience to improve current activities and promote better planning for future action
      • is designed specifically with the intention to attribute changes to the intervention itself
      • answers the question, “what have we achieved and what impact have we had?”
    Monitoring and Evaluation
  • 5.
    • The Results Chain Explained
    Monitoring and Evaluation: the results chain (needs, objective, inputs, activities, outputs, outcome, impact) assessed against the criteria of relevance, efficiency, effectiveness, and sustainability
  • 6.
    • Challenges and Limits to Management
    Monitoring and Evaluation
      • Inputs and Activities: what is within the direct control of the development intervention’s management
      • Outputs: what the development intervention can be expected to achieve and be accountable for
      • Outcome: what the development intervention is expected to contribute to
      • Impact
      • Logic versus degree of control: moving up the chain, control decreases and the challenge of monitoring and evaluation increases
  • 7.
    • Life Cycle of Monitoring and Evaluation
    Monitoring and Evaluation (Key: EA = ex-ante, MT = mid-term, EP = ex-post)
    Each intervention period (e.g., 5 years) includes ex-ante, mid-term, and ex-post evaluation, repeated over successive intervention periods.
  • 8. Evaluation Capacity Development: History of ADB Support
    • 1990-1994: Build Post-Evaluation Capability
    • 1995-1998: Establish Performance Management Systems
    • 1999-To Date: Build Monitoring and Evaluation Systems
  • 9. Evaluation Capacity Development: Lessons of Experience (Lessons 1–5)
    • The preconditions to success of evaluation capacity development are substantive government demand, existence of a mandate by decree for evaluation, and stability in staffing such that a very high proportion of trained personnel remain in tasks for which they were trained.
  • 10. Regional Technical Assistance
    • Impact
    • Higher efficiency and effectiveness in providing public sector services, leading to poverty reduction
    • Outcome
    • Improved ranges of skills, resources, systems, and attitudes for performance of results-based monitoring and evaluation of country partnership strategies, sector strategies, policies, programs, and projects in developing member countries of the Greater Mekong Subregion
  • 11. Outputs (Key: M&E = monitoring and evaluation, ECD = evaluation capacity development)
    • Proficiency in M&E: Tools, Methods, & Approaches for M&E; Strategy & Policy Formulation for M&E
    • Research & Special Studies for M&E: Country Strategies for M&E; A Strategy for ECD
  • 12. Outputs (Key: M&E = monitoring and evaluation)
    • Knowledge Sharing & Learning for M&E: Knowledge Sharing & Learning Platforms; Knowledge Networks; Partnership Arrangements with Evaluation Associations
  • 13. Assumptions and Risks Output Level
    • Risks
      • ADB: The indicative activities and staffing schedule is too tight to permit productive sequencing of key activities
    • Assumptions
      • ADB: Technical assistance activities integrate the chief lessons learned from past evaluations
      • Appropriate, integrated training programs can be planned, designed, or identified; and synergetic effects can be achieved
      • Training is conducted well and according to realistic schedules
      • The consultants and the selected evaluation agency staff coordinate activities effectively
      • The consultants have client management skills
      • The consultants and the selected evaluation agency staff maintain clear roles, responsibilities, and deadlines
      • Clients: Evaluation agency staff are available to be trained
  • 14. Assumptions and Risks Outcome Level
    • Risks
      • Clients: Evaluation agencies underestimate the importance of national ownership and leadership of the evaluation process and of building national monitoring and evaluation capacities
    • Assumptions
      • Clients: Basic capacity exists and can be mobilized
      • The role and use of monitoring and evaluation in support of practices of knowledge management are understood
      • The funding agency has a clear vision about the intended outcome of the technical assistance and how it is to be achieved
  • 15. Assumptions and Risks Impact Level
    • Risks
      • Clients: Lack of human security; armed conflict; economic policies that discourage pro-poor growth; weak scrutiny by the legislative branch of the executive branch; ineffective voice of intended beneficiaries; and corruption, clientelism, or patrimonialism do not provide a broadly enabling environment for monitoring and evaluation
      • Fragmented government with poor overall capacity; absent, noncredible, and/or rapidly changing policies; unpredictable, unbalanced, or inflexible funding and staffing; poor public service conditions; segmented and compartmentalized organizations; or insufficient commitment to an evaluation culture do not conduce to government effectiveness
    • Assumptions
      • ADB: Able to plan and support evaluation capacity development in line with international best practice
      • Clients: Provide visible leadership, promote clear sense of mission, encourage participation, and establish explicit expectations on performance and rewards
      • Strategically approach change management and manage it proactively
      • Involve a critical mass of staff
      • Try, test, and adapt organizational innovations
      • Celebrate quick wins
  • 16. Technical Assistance Partnerships: ADB; Center for Development and Research in Evaluation; International Program for Development Evaluation Training; Asia-Pacific Finance and Development Center; Regional Cooperation and Poverty Reduction Fund of the People’s Republic of China
  • 17. Selected Evaluation Agencies
    • Tajikistan: Amirov Fakhriddin K., State Budget Department, Ministry of Finance
    • Lao PDR: Vixay Xaovana, Committee for Planning and Investment; Akhom Praseuth, Bank of Lao PDR; Bounthay Leuangvilay, Budget Department, Ministry of Finance
    • Cambodia: Hou Taing Eng, Ministry of Planning; Im Sour, Cambodian Rehabilitation and Development Board; Suon Sophal, Cambodian Investment Board; Lors Pinit, Department of Investment and Cooperation; Hay Sovuthea, Supreme National Economic Council
    • Malaysia: Arunaselam Rasappan, Center for Development and Research in Evaluation; Mariappan Mahalingam, Center for Development and Research in Evaluation
    • Viet Nam: Tran Ngoc Lan, Ministry of Planning and Investment; Nguyen Dang Binh, Ministry of Planning and Investment; Pham Thai Linh, Ministry of Natural Resources and Environment; Nguyen Trang Thu, National Academy of Public Administration
  • 18. Technical Assistance Schedule
  • 19. Deliverables Training & Capacity Building Research & Special Studies Knowledge Sharing & Networking Strategic Direction for Evaluation Capacity Development Strengthened Evaluation Capacity Improved Service Delivery Leading to Poverty Reduction http://www.adb.org/evaluation
  • 20. Training Strategy Stage A1 April 2008 Introductory Training in M&E for Country Trainers (ToT) CeDRE In-Country Preliminary Preparatory Work by ToT Trainees In-Country Stage A2 Apr-Sept 08 Policy Level M&E Training (Round 1) IPDET 1 Stage B1 June 2008 Intermediate M&E Training for Trainees LAO PDR Stage A3 Oct. 2008 In-Country Stage 1 Down-line Training by ToT Trainees In-Country Stage A4 Nov 08-Mar 09 Advanced Level M&E Training for Trainees Cambodia Stage A5 Apr 2009 Policy Level M&E Training (Round 2) IPDET 2 Stage B2 June 2009 In-Country Stage 2 Down-line Training by ToT Trainees In-Country Stage A6 Jul 09-Sept 09 Final Wrap-Up Training & Certification of Country M&E Trainers AFDC/ADB Stage A7 Oct. 2009