MONITORING AND EVALUATION
BY OLASHORE EMMANUEL
DEFINITIONS

Monitoring involves the routine collection of data that measure progress toward achieving programme objectives, using record keeping and regular reporting.
EVALUATION

Evaluation is a collection of activities designed to determine the value or worth of a specific programme.
It focuses on measuring whether planned outcomes and impacts are achieved, i.e. whether one is attaining the intended outcomes and impacts.
A monitoring system must be in place for evaluation to draw on, which is why M&E is always considered one compact entity.
DIFFERENCE BETWEEN MONITORING AND EVALUATION

 Frequency
   Monitoring: regular, routine
   Evaluation: periodic, whether at the beginning, middle, or end of a program
 Focus
   Monitoring: measures input, process, and output variables
   Evaluation: measures outcome and impact variables
 Purpose
   Monitoring: to know the ongoing status of a program
   Evaluation: to know the impact of a program
 Those involved
   Monitoring: in-house, involving program planners
   Evaluation: external people; donors especially may want to come for on-the-spot checks and triangulation of supplied data
IMPORTANCE

 To make informed decisions regarding ongoing programs
 To ensure the most effective and efficient use of resources
 To determine whether a program is on track and where changes need to be considered
 To help stakeholders conclude whether the program is a success
 To inform subsequent decisions about programs

In essence, M&E is used:
 as a management tool for planning and implementation
 as an advocacy tool to raise awareness and convince stakeholders to act
 as an accountability tool for performance monitoring

What to monitor

INDICATOR
 A key or pointer showing the direction
 A variable that measures one aspect of a program or project that is directly related to program objectives
 A statement that tells us what will be measured to determine whether or not a result has been accomplished
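As a minimal sketch of this idea, the hypothetical Python class below models an indicator as a variable with a baseline and a target tied to one objective; the class name, fields, and immunisation figures are illustrative assumptions, not part of any standard M&E toolkit.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """One measurable variable linked directly to a programme objective."""
    name: str        # what will be measured
    objective: str   # the programme objective it is linked to
    baseline: float  # value before the intervention
    target: float    # planned value, i.e. the expected result

    def progress(self, current: float) -> float:
        """Fraction of the distance from baseline to target covered so far."""
        return (current - self.baseline) / (self.target - self.baseline)

# Hypothetical example: an immunisation-coverage indicator.
coverage = Indicator(
    name="Children under 1 fully immunised (%)",
    objective="Raise immunisation coverage from 40% to 80%",
    baseline=40.0,
    target=80.0,
)
print(f"{coverage.progress(60.0):.0%} of the way to target")  # prints: 50% of the way to target
```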

CRITERIA FOR SELECTING INDICATORS

 Is the indicator linked to the expected result?
 Is the indicator defined in the same way over time?
 Are the data for the indicator collected in the same way over time?
 Will data be available for the indicator?
 Is data collection cost-effective?
 Is this indicator important to most people, i.e. will the data collected provide sufficient information to both supporters and skeptics?
 Is the indicator measurable?
QUALITIES OF INDICATOR

 Measurable
 Related to goal and objectives
 Reliable (consistency)
 Credible
 Economical
TYPES OF INDICATORS

 INPUT indicator
 OUTPUT indicator
 OUTCOME indicator
 IMPACT indicator
OBSERVATION IS KEY IN M&E
DATA

 Data refers to raw observations. In raw form they make no sense, but when processed, they inform.
 The information derived from indicators and recorded or reported is what is referred to as data.
 When programs are implemented through a series of activities, these activities are captured in the form of data.
 Data can be quantitative or qualitative.
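As a small illustration of raw observations becoming information, the sketch below processes a hypothetical attendance sheet (one of the data collection tools listed later) into a single quantitative figure; all records and numbers are made up.

```python
# Raw observations from a hypothetical attendance sheet:
# one record per participant per session.
attendance_sheet = [
    {"session": 1, "participant": "A", "present": True},
    {"session": 1, "participant": "B", "present": False},
    {"session": 2, "participant": "A", "present": True},
    {"session": 2, "participant": "B", "present": True},
]

# The raw rows say little on their own; processed, they inform.
present = sum(record["present"] for record in attendance_sheet)
rate = present / len(attendance_sheet)
print(f"Attendance rate: {rate:.0%}")  # prints: Attendance rate: 75%
```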
QUALITIES OF DATA

 Validity (internal and external)
 Integrity
 Timeliness
 Accuracy (precision)
 Reliability

V-I-T-A-R
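Where data are collected electronically, some of these qualities can be checked mechanically. The sketch below is one hedged example of such checks; the field names, the 0-100 range, and the 30-day threshold are assumptions for illustration only.

```python
from datetime import date, timedelta

def check_record(record: dict, today: date) -> list:
    """Flag basic data-quality problems; a rough, partial proxy for V-I-T-A-R."""
    problems = []
    # Validity: the value must fall within a plausible range (assumed 0-100).
    if not 0 <= record["value"] <= 100:
        problems.append("validity: value outside the 0-100 range")
    # Timeliness: reported within 30 days of collection (assumed threshold).
    if today - record["collected_on"] > timedelta(days=30):
        problems.append("timeliness: reported more than 30 days after collection")
    # Integrity: a data source must be recorded.
    if not record.get("source"):
        problems.append("integrity: no source recorded")
    return problems

# A deliberately faulty record trips all three checks.
record = {"value": 120, "collected_on": date(2024, 1, 5), "source": ""}
print(check_record(record, today=date(2024, 3, 1)))
```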
EXAMPLES OF DATA COLLECTION TOOLS

 Attendance sheet
 Referral loop
 Register
 Pictures
 Minutes
 Report
M&E PLAN

 Every project or intervention should have a monitoring and evaluation (M&E) plan.
 This is the fundamental document that details a program's objectives and the interventions developed to achieve them, and describes the procedures that will be implemented to determine whether or not the objectives are met.
 It shows how the expected results of a program relate to its goals and objectives; describes the data needed, how these data will be collected and analyzed, and how the information will be used; and states the resources that will be needed and how the program will be accountable to stakeholders.
…CONTINUED

 M&E plans should be created during the design phase of a program.
 Such a plan should be considered a living document and revised whenever a program is modified or new information is needed.
Importance of M&E plans

 State how a program will measure its achievements and therefore provide accountability
 Document consensus and provide transparency
 Guide the implementation of M&E activities in a standardized and coordinated way
 Preserve institutional memory
STEPS TO DEVELOP AN M&E WORKPLAN

 Identify program goals and objectives
 Determine evaluation questions, indicators, and their feasibility
 Design the methodology for monitoring the process
 Resolve implementation issues: who will perform the work? How will existing data and past evaluation studies be used?
 Identify internal and external evaluation resources and capacity
 Develop an M&E workplan matrix and timeline
M&E MATRIX

One row per program objective, with the following columns:

 Program (objectives)
 Activities
 Monitoring:
    Input
    Output (immediate results of first-level activities)
 Evaluation:
    Outcome (short-term effects/results at the level of the goal)
    Impact (long-term effects)
 Key responsible
 Methodology
 MOV (means of verification)
 Time frame
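Read as a data structure, the matrix has one row per objective. The hypothetical Python record below mirrors the column headings above; the field names and example values are assumptions, not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class MatrixRow:
    """One row of an M&E matrix, mirroring the column headings above."""
    objective: str
    activities: list
    inputs: list       # monitoring: resources going into the activities
    outputs: list      # monitoring: immediate results of first-level activities
    outcomes: list     # evaluation: short-term effects at the level of the goal
    impacts: list      # evaluation: long-term effects
    key_responsible: str
    methodology: str
    mov: str           # means of verification
    time_frame: str

row = MatrixRow(
    objective="Raise immunisation coverage to 80%",
    activities=["Community outreach sessions"],
    inputs=["Vaccines", "Outreach staff"],
    outputs=["Sessions held", "Children vaccinated"],
    outcomes=["Coverage rate among under-ones"],
    impacts=["Reduced incidence of vaccine-preventable disease"],
    key_responsible="District health officer",
    methodology="Facility records review",
    mov="Immunisation register",
    time_frame="Quarterly",
)
print(row.objective, "->", row.mov)
```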




THANK YOU VERY MUCH
