2. Introduction to Evaluation
Evaluation is a systematic and purposeful undertaking carried out by internal or
external evaluators to appraise the relevance, efficiency and effectiveness of plans,
policies, programmes and projects under implementation, as well as the impacts
and sustainability they generate.
The main objective of evaluation is:
• to draw lessons from the strengths and weaknesses experienced in the
implementation of plans, policies, programmes and projects, so as to improve
their design and implementation in the future, and to hold the officials and
agencies involved accountable for implementation and results.
3. Why do we need evaluation?
• We need to have confidence that what we are doing is of value and to learn
how to do it better
• Evaluation has shown us that some of our "good ideas" for interventions
don't work or are counterproductive
• Evidence of effectiveness is becoming increasingly important for getting new
interventions accepted and resources allocated to them
• Evaluation is crucial if we are to advance the services we provide for
survivors and to convince policy makers and funders of the value of our work
4. Monitoring and evaluation during Project Period
[Timeline diagram: the Ex-ante Evaluation and Baseline Survey take place at the
Selection Stage; the Mid-term Evaluation and Terminal Evaluation, accompanied
by Continuous Monitoring, take place during the Implementation Stage; the
Ex-Post Evaluation, accompanied by Sustainability Monitoring, takes place during
the Operation Stage, once the flow of benefits has begun.]
5. What needs to be evaluated?
• We need to understand the value and effectiveness of all of the current
components of health status
• We need to evaluate / have evaluated new interventions
• What is a new intervention? Changes to policy, training programmes,
advocacy/education programmes, information leaflets, changes in staffing,
counselling approaches, treatments such as post-exposure prophylaxis, etc.
6. Using evaluation to develop an evidence base
• Evaluations can be thought of as falling into two classes: formative
and summative
• Formative evaluation is a method for judging the worth of a program
while the program activities are forming (in progress). It can be
conducted during any phase. This part of the evaluation focuses on
the process
• Summative evaluation is a method of judging the worth of a program
at the end of the program activities (summation). The focus is on the
outcome.
7. Formative evaluation
• This will usually involve multiple iterations of testing and often the use of a
range of methods
• Successful interventions, whether behavioural or biomedical, always have
theoretical bases and are built on previous research
• This needs to be articulated, and a mapping exercise should be undertaken
to address the question "what do I need to know in order to do this well?"
• If the knowledge base has gaps, it may be important to conduct more basic
research before developing interventions
• Sometimes we won't know everything when we develop interventions, but
it is very helpful to have mapped out the gaps, as we may be able to address
them in the course of the formative research or in parallel studies
8. Formative evaluation
• Qualitative research is particularly valuable in the first stages of
formative evaluation because it enables us to learn the unexpected
• Very often we initially test informational and behavioural interventions
by exposing a small group of people to them and then gathering
feelings, reactions, responses, initial feedback, etc. using qualitative
methods
• Usually this qualitative information can be collected by skilled note
taking, processed rapidly and used to inform the next draft of the
intervention
9. Types and Uses of Evaluation
Formative Evaluation / Needs Assessment
• When to use: during the development of a new program; when an existing
program is being modified, or is being used in a new setting or with a new
population.
• What it shows: whether the proposed program elements are likely to be
needed, understood and accepted by the population we want to reach; the
extent to which an evaluation is possible, based on the goals and objectives.
• Why it is useful: it allows modifications to be made to the plan before full
implementation begins, maximizing the likelihood that the program will
succeed.
Process Evaluation / Program Monitoring
• When to use: as soon as program implementation begins; during the
operation of an existing program.
• What it shows: how well the program is working; the extent to which the
program is being implemented as designed; whether the program is
accessible and acceptable to its target population.
• Why it is useful: it provides an early warning of any problems that may
occur, and allows programs to monitor how well their plans and activities
are working.
Outcome Evaluation / Objectives-Based Evaluation
• When to use: after the program has made contact with at least one person
or group in the target population.
• What it shows: the degree to which the program is having an effect on the
target population's behaviors.
• Why it is useful: it tells whether the program is being effective in meeting its
objectives.
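Outcome evaluation, as described above, compares the target population's behavior before and after program contact. A minimal sketch of that comparison; the 30% baseline and 45% follow-up rates are illustrative values, not figures from the source:

```python
# Hypothetical outcome-evaluation figures (illustrative only):
# proportion of the target population practicing the desired behavior.
baseline_rate = 0.30   # before the program made contact
followup_rate = 0.45   # after program contact

# Absolute change, in proportion points
absolute_change = followup_rate - baseline_rate

# Relative change against the baseline (here a 50% increase)
relative_change = absolute_change / baseline_rate

print(f"absolute change: {absolute_change:.2f}, relative change: {relative_change:.0%}")
```

A real outcome evaluation would also need a comparison group or statistical controls before attributing the change to the program itself.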
10. Types and Uses of Evaluation
Economic Evaluation (Cost Analysis, Cost-Effectiveness Evaluation, Cost-Benefit
Analysis, Cost-Utility Analysis)
• When to use: at the beginning of a program; during the operation of an
existing program.
• What it shows: what resources are being used in a program and their costs
(direct and indirect) compared to outcomes.
• Why it is useful: it gives program managers and funders a way to assess cost
relative to effects.
Impact Evaluation
• When to use: during the operation of an existing program at appropriate
intervals; at the end of a program.
• What it shows: the degree to which the program meets its ultimate goal,
e.g. the overall rate of the program intervention or of disease transmission.
• Why it is useful: it provides evidence for use in policy and funding decisions.
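Economic evaluation, as described above, relates program costs to outcomes. A minimal sketch of the two ratios commonly reported in cost-effectiveness evaluation; the function names, dollar amounts and effect sizes are all hypothetical:

```python
def cost_effectiveness_ratio(total_cost, health_effect):
    """Cost per unit of health effect (e.g. cost per case averted)."""
    return total_cost / health_effect

def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost-effectiveness ratio: the extra cost per extra
    unit of effect when replacing an old program with a new one."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical programs: costs in dollars, effects in cases averted
print(cost_effectiveness_ratio(20000, 100))   # 200.0 dollars per case averted
print(icer(50000, 120, 20000, 100))           # 1500.0 dollars per extra case averted
```

Cost-benefit analysis would instead express the outcomes themselves in monetary terms, and cost-utility analysis would measure effects in units such as QALYs.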
11. Different Approaches to Evaluation
Appreciative Inquiry: A strengths-based approach designed to support ongoing
learning and adaptation by identifying and investigating outlier examples of good
practice and ways of increasing their frequency.
Beneficiary Assessment: An approach that focuses on assessing the value of an
intervention as perceived by the (intended) beneficiaries, thereby aiming to give
voice to their priorities and concerns.
Case Study: A research design that focuses on understanding a unit (person, site or
project) in its context, which can use a combination of qualitative and quantitative
data.
Causal Link Monitoring: An approach designed to support ongoing learning and
adaptation, which identifies the processes required to achieve desired results, and
then observes whether those processes take place, and how.
Collaborative Outcomes Reporting: An impact evaluation approach based on
contribution analysis, with the addition of processes for expert review and
community review of evidence and conclusions.
12. Different Approaches to Evaluation
Contribution Analysis: An impact evaluation approach that iteratively maps available
evidence against a theory of change, then identifies and addresses challenges to causal
inference.
Democratic Evaluation: A family of approaches to doing evaluation in ways that
support democratic decision making, accountability and/or capacity.
Developmental Evaluation: An approach designed to support ongoing learning and
adaptation, through iterative, embedded evaluation.
Empowerment Evaluation: A participatory approach designed to provide groups with
the tools and knowledge so they can monitor and evaluate their own performance.
Horizontal Evaluation: An approach to learning and improvement that combines self-
assessment by local participants and external review by peers.
13. Different Approaches to Evaluation
Innovation History: A particular type of case study used to jointly develop an agreed
narrative of how an innovation was developed, including key contributors and
processes, to inform future innovation efforts.
Institutional Histories: A particular type of case study used to create a narrative of
how institutional arrangements have evolved over time and have created and
contributed to more effective ways to achieve project or program goals.
Most Significant Change: An approach primarily intended to clarify differences in
values among stakeholders by collecting and collectively analyzing personal accounts
of change.
Outcome Harvesting: An impact evaluation approach suitable for retrospectively
identifying emergent impacts by collecting evidence of what has changed and, then,
working backwards, determining whether and how an intervention has contributed to
these changes.
14. Different Approaches to Evaluation
Outcome Mapping: An impact evaluation approach which unpacks an initiative's
theory of change, provides a framework to collect data on immediate, basic changes
that lead to longer-term, more transformative change, and allows for the plausible
assessment of the initiative's contribution to results via "boundary partners".
Participatory Evaluation: A range of approaches that engage stakeholders (especially
intended beneficiaries) in conducting the evaluation and/or making decisions about
the evaluation.
Participatory Rural Appraisal (PRA) / Participatory Learning for Action (PLA): A
participatory approach which enables people to analyze their own health status and
develop a common perspective on health care delivery units and the benefits of
health programs such as vaccination, deworming, etc.
Positive Deviance: A strengths-based approach to learning and improvement that
involves intended evaluation users in identifying "outliers" - those with exceptionally
good outcomes - and understanding how they have achieved these.
15. Different Approaches to Evaluation
Qualitative Impact Assessment Protocol (QUIP): An impact evaluation approach without a control
group that uses narrative causal statements elicited directly from intended project beneficiaries.
Randomised Controlled Trials (RCT): An impact evaluation approach that compares results between a
randomly assigned control group and experimental group or groups to produce an estimate of the
mean net impact of an intervention.
Realist Evaluation: An approach especially to impact evaluation which examines what works for
whom in what circumstances through what causal mechanisms, including changes in the reasoning
and resources of participants.
Social Return on Investment (SROI): A participatory approach to value-for-money evaluation that
identifies a broad range of social outcomes, not only the direct outcomes for the intended
beneficiaries of an intervention.
Success Case Method: An impact evaluation approach based on identifying and investigating the most
successful cases and seeing if their results can justify the cost of the intervention (such as a training
course).
Utilization-Focused Evaluation: Uses the intended uses of the evaluation by its primary intended
users to guide decisions about how an evaluation should be conducted.
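The RCT approach listed above estimates the mean net impact of an intervention as the difference between the treatment and control group means. A minimal sketch with simulated outcome scores; the group sizes, means and outcome scale are invented for illustration:

```python
import random
import statistics

random.seed(42)  # fixed seed so the simulation is reproducible

# Simulated outcome scores: the treatment group's true mean is 5 points higher
control = [random.gauss(50, 10) for _ in range(200)]
treatment = [random.gauss(55, 10) for _ in range(200)]

# Estimated mean net impact: difference between the group means.
# Random assignment makes the groups comparable on average, so this
# difference is an unbiased estimate of the intervention's effect.
net_impact = statistics.mean(treatment) - statistics.mean(control)
print(f"estimated mean net impact: {net_impact:.2f} points")
```

A real analysis would also report a confidence interval or significance test for the difference, not just the point estimate.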
16. Evaluation of Medical Technology
With respect to the evaluation of medical technologies (medicines,
vaccines, etc.), there is a general consensus that RCTs are the gold
standard. However, this consensus is often extrapolated to the idea
that public health interventions, just like medical ones, are unworthy
of consideration unless they have been submitted to randomized trials,
and it is recommended "to reject the scientific double standard of
what constitutes acceptable evidence of efficacy for clinical versus
public health interventions".
18. Efficacy, Effectiveness and Efficiency
• Health services are evaluated in terms of efficacy, effectiveness and efficiency.
Efficacy: The ability of an intervention to produce the desired or intended result.
Efficacy is measured in a situation in which all conditions are controlled to
maximize the effect of the agent.
Effectiveness: The degree to which an intervention is successful in producing the
desired result under real-world conditions; for example, the effectiveness of a
treatment or of a vaccine.
Efficiency: The relation between the results achieved by an agent or
medicine/vaccine and the resources used; for example, treating a specific disease
with a medicine that has fewer side effects, at a lower price, and with a shorter
duration or fewer doses.
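For vaccines, efficacy is commonly quantified from attack rates in a controlled trial using the standard formula VE = (ARU - ARV) / ARU. A minimal sketch; the 4% and 1% attack rates are hypothetical:

```python
def vaccine_efficacy(aru, arv):
    """Vaccine efficacy (%) from the attack rate among the unvaccinated
    (aru) and the attack rate among the vaccinated (arv)."""
    return (aru - arv) / aru * 100

# Hypothetical controlled trial: 4% of unvaccinated and 1% of vaccinated
# participants fall ill, giving an efficacy of about 75%
print(vaccine_efficacy(0.04, 0.01))
```

Computed from routine field data rather than a controlled trial, the same calculation yields vaccine effectiveness, which illustrates the distinction drawn above.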
19. Characteristics of good evaluation
Good evaluation is inclusive.
• It ensures that diverse viewpoints are taken into account and that results are as complete
and unbiased as possible.
Good evaluation is honest.
• Evaluation results are likely to suggest that the program has strengths as well as
limitations.
Good evaluation is replicable and its methods are as rigorous as circumstances allow.
• A good evaluation is one that is likely to be replicable, meaning that someone else should
be able to conduct the same evaluation and get the same results. The higher the quality
of evaluation design, its data collection methods and its data analysis, the more accurate
its conclusions and the more confident others will be in its findings.
21. Difference between Monitoring and Evaluation
• Monitoring is the systematic and routine collection of information about
program/project activities; evaluation is the periodic assessment of those
activities.
• Monitoring is an ongoing process carried out to see whether activities are on
track, i.e. it regularly tracks the program; evaluation is done on a periodic
basis to measure success against the objectives, i.e. it is an in-depth
assessment of the program.
• Monitoring is done from the initial stage of the project onwards; evaluation
is done after a certain point in the project, usually at the middle of the
project, at its completion, or when moving from one stage to another.
• Monitoring is usually done by internal members of the team; evaluation is
mainly done by external members, although it may also be done by internal
members, or by internal and external members in combination.
• Monitoring provides information about the current status and thus helps in
taking immediate remedial action, if necessary; evaluation provides
recommendations, information for long-term planning and lessons for
organizational growth and success.
22. Difference between Research and Evaluation
• Research produces generalizable knowledge; evaluation judges merit or
worth.
• Research is scientific inquiry based on intellectual curiosity; in evaluation,
the policy and program interests of stakeholders are paramount.
• Research advances broad knowledge and theory; evaluation provides
information for decision-making on a specific program.
• Research is conducted in a controlled setting; evaluation is conducted
within settings of changing actors, priorities, resources and timelines.
23. Health Program evaluation bodies in Nepal
• National Planning Commission - the apex body
• Nepal Health Research Council - scientific research and evaluation of
programs and projects
• Ministry of Health and Population - Policy, Planning and Monitoring
Division
• Department of Health Services - Health Management Information
System (HMIS)
• Provincial and local governments
24. Health program monitoring and evaluation tools in Nepal
• Health Management Information System (HMIS) - monthly monitoring
and end-of-year evaluation of health programs
• Nepal Demographic and Health Survey (NDHS) - every five years
• Census - every ten years
• Program-specific surveys and research