M&E
Definitions
Coherence
Note: The extent to which other interventions (particularly policies) support or undermine the intervention and vice versa. This includes internal coherence
and external coherence. Internal coherence addresses the synergies and interlinkages between the intervention and other interventions carried out by the
same institution/government, as well as the consistency of the intervention with the relevant international norms and standards to which that
institution/government adheres. External coherence considers the consistency of the intervention with other actors’ interventions in the same context. This
includes complementarity, harmonisation and co-ordination with others, and the extent to which the intervention is adding value while avoiding duplication
of effort.
Coherence is connected in particular with relevance, effectiveness and impact.
While relevance assesses the intervention at the level of the needs and priorities of the stakeholders and beneficiaries that are directly involved, coherence goes up to the next
level and looks at the fit of the intervention within the broader system. Both relevance and coherence consider how the intervention aligns with the context, but they do so from
different perspectives.
Coherence is often a useful angle through which to begin examining unintended effects, which can be captured under effectiveness and impact. While the intervention may
achieve its objectives (effectiveness) these gains may be reversed by other (not coherent) interventions in the context.
Likewise there are links with efficiency: incoherent interventions may be duplicative, thus wasting resources.
Challenges of evaluating coherence and how to address them: The table identifies several of the key challenges when evaluating coherence – including challenges related to
breadth of scope, mandate and data availability. Suggestions are made for ways of addressing them for both evaluators and evaluation managers.
Coherence was added as a criterion to better capture linkages, systems thinking, partnership dynamics, and complexity.
(English meaning: 1. the quality of being logical and consistent; 2. the quality of forming a unified whole.)
Monitoring
• observe and check the progress or quality of (something)…
• Monitoring is the continuous process of collecting and analyzing data (indicators),
with a view to identifying any need for corrective actions to ensure Project
execution towards attaining its Objective.
• A continuing function that uses systematic collection of data on specific indicators
to provide management and the main stakeholders of an ongoing development
intervention with indications of the extent of progress and achievement of
objectives and progress on the use of allocated funds. (DAC)
• The basis for a quality management system is the Plan, Do, Check, Act (PDCA)
concept, where the Check step is the monitoring part of the concept.
• Monitoring is the systematic and continuous collection, analysis and use
of information for the purpose of management and decision-making. It
therefore represents an exhaustive and regular examination of the
resources, outputs and results of a project. It is a continuous process
carried out during the execution of a project with the intention of
immediately correcting any deviation from operational objectives.
Monitoring generates data that can be used in evaluations.
• Monitoring: Regularly collecting, reviewing, reporting and acting on
information about project implementation. Monitoring is generally used to
check our performance against ‘targets’ as well as to ensure compliance
with donor regulations. (Mercy Corps DM&E Guidelines).
Evaluation
Evaluation is a periodic assessment of the efficiency, effectiveness,
impact, sustainability and relevance of a project in the context of stated
objectives. It is usually undertaken as an independent examination of the
background, objectives, results, activities and means deployed, with a view
to drawing lessons that may guide future decision-making. It therefore seeks
to determine, as systematically and objectively as possible, the relevance,
efficiency and effect of a project in terms of its objectives.
• Evaluation is an in-depth, retrospective analysis of a specific aspect (or
aspects) of a project that occurs at a single point in time. Evaluation is
generally more focused and intense than monitoring and often uses more
time-consuming techniques such as surveys, focus groups, interviews and
workshops.
Evaluation
• The systematic and objective assessment of an on-going or completed project,
programme or policy, its design, implementation and results. The aim is to determine the
relevance and fulfillment of objectives, development efficiency, effectiveness, impact and
sustainability. An evaluation should provide information that is credible and useful,
enabling the incorporation of lessons learned into the decision–making process of both
recipients and donors. Evaluation also refers to the process of determining the worth or
significance of an activity, policy or program. An assessment, as systematic and objective
as possible, of a planned, on-going, or completed development intervention.
Note: Evaluation in some instances involves the definition of appropriate standards, the
examination of performance against those standards, an assessment of actual and
expected results and the identification of relevant lessons. (DAC)
• Evaluations tell us if our projects are achieving our goals, reaching our target groups, if
we’re effective and sustainable, and if our project design and activities are appropriate.
They produce recommendations and lessons learned that help us improve.
Evaluation of an undertaking at different points in time
An investment case, a process or a project is typically divided into three distinct phases. In the beginning, an idea and
decision phase lasts until the final decision to implement is made. The implementation phase follows, continuing
until the project’s outputs are realized. The goal could be to build a building, to reorganize an organization, or to have
a student pass a final exam. Finally, there is an operational phase, in which the benefits of the project are realized or
revenue comes in.
Types of Evaluation
• Conceptualization Phase – Purpose: Helps prevent waste and identify potential areas of concern while increasing the chances of success. Types: Formative Evaluation.
• Implementation Phase – Purpose: Optimizes the project, measures its ability to meet targets, and suggests improvements for efficiency. Types: Process Evaluation, Outcome Evaluation, Economic Evaluation.
• Project Closure Phase – Purpose: Provides insights into the project’s success and impact, and highlights potential improvements for subsequent projects. Types: Impact Evaluation, Summative Evaluation, Goals-Based Evaluation.
Experience indicates that today, most evaluation activities occur in the implementation phase or just after its conclusion, options designated interim evaluation
and final evaluation, respectively. This is puzzling because the implementation phase is the period in which the project is least likely to benefit from an evaluation
(Samset 2003). An interim evaluation can help avoid or correct mistakes during a project, that is, it provides management information. A final evaluation assesses
the results at the conclusion of the implementation phase, that is, it provides control information.
Formative Evaluation (also known as ‘evaluability assessment’)
Meaning/Purpose: Formative evaluation is used before program design or implementation. It generates data on the need for the program and develops the baseline for subsequent monitoring. It also identifies areas of improvement and can give insights on what the program’s priorities should be. This helps project managers determine their areas of concern and focus, and increases awareness of the program among the target population prior to launch. A formative evaluation aims to improve and refine an existing project (MEAL DPro). An evaluation conducted to improve performance, most often during the implementation phase of projects or programs. Note: formative evaluation may also be conducted for other reasons such as compliance, legal requirements or as part of a larger evaluation initiative (OECD).
When: New program development; program expansion. It is carried out early in project implementation, up to the midpoint.
What: The need for your project among the potential beneficiaries; the current baseline of relevant indicators, which can help show impact later.
Why: Helps make early improvements to the program; allows project managers to refine or improve the program.
How: Conduct sample surveys and focus group discussions among the target population, focused on whether they are likely to need, understand, and accept program elements.
Questions to ask: Is there a need for the program? What can be done to improve it?
Process Evaluation (also known as ‘program monitoring’)
Meaning/Purpose: Process evaluation occurs once program implementation has begun, and it measures how effective your program’s procedures are. The data it generates is useful in identifying inefficiencies and streamlining processes, and portrays the program’s status to external parties. A process evaluation aims to understand how well a project is being implemented (or was implemented), particularly if you want to replicate or enlarge your response.
When: When program implementation begins; during operation of an existing program. It is carried out during project implementation (often at the midpoint) or at the end.
What: Whether program goals and strategies are working as they should; whether the program is reaching its target population, and what they think about it.
Why: Provides an opportunity to avoid problems by spotting them early; allows program administrators to determine how well the program is working.
How: Conduct a review of internal reports and a survey of program managers and a sample of the target population. The aim should be to measure the number of participants, how long they have to wait to receive benefits, and what their experience has been.
Questions to ask: Who is being reached by the program? How is the program being implemented and what are the gaps? Is it meeting targets?
Outcome Evaluation
Meaning/Purpose: Outcome evaluation is conventionally used during program implementation. It generates data on the program’s outcomes and …
When: After the program …
What: How much the program has affected the target population.
Why: Helps program administrators tell …
How: A randomized controlled trial, …
Questions to ask: Did participants report the desired …
Economic Evaluation (also known as ‘cost analysis’, ‘cost-effectiveness evaluation’, ‘cost-benefit analysis’, and ‘cost-utility analysis’)
Meaning/Purpose: Economic evaluation is used during the program’s implementation and looks to measure the benefits of the program against the costs. Doing so generates useful quantitative data that measures the efficiency of the program. This data is like an audit, and provides useful information to sponsors and backers who often want to see what benefits their money would bring to beneficiaries.
When: At the beginning of a program, to remove potential leakages; during the operation of a program, to find and remove inefficiencies.
What: What resources are being spent and where; how these costs are translating into outcomes.
Why: Program managers and funders can justify or streamline costs; the program can be modified to deliver more results at lower costs.
How: A systematic analysis of the program by collecting data on program costs, including capital and man-hours of work. It will also require a survey of program officers and the target population to determine potential areas of waste.
Questions to ask: Where is the program spending its resources? What are the resulting outcomes?
Impact Evaluation
Meaning/Purpose: Impact evaluation studies the entire program from beginning to end (or at whatever stage the program is at), and looks to quantify whether or not it has been successful. Focused on the long-term impact, impact evaluation is useful for measuring sustained changes brought about by the program or making policy changes or modifications to the program. An impact or outcome evaluation aims to assess how well a project met its goal to produce change. Impact evaluations can use rigorous data collection and analysis and control groups.
When: At the end of the program; at pre-selected intervals in the program or at the end of a project. It requires baseline data be gathered at the beginning of implementation and regular, rigorous monitoring activities.
What: Assesses the change in the target population’s well-being; accounts for what would have happened if there had been no program.
Why: To show proof of impact by comparing beneficiaries with control groups; provides insights to help in making policy and funding decisions.
How: A macroscopic review of the program, coupled with an extensive survey of program participants, to determine the effort involved and the impact achieved. Insights from program officers and suggestions from program participants are also useful, and a control group of non-participants for comparison is helpful.
Questions to ask: What changes in program participants’ lives are attributable to your program? What would those not participating in the program have missed out on?
Summative Evaluation
Meaning/Purpose: Summative evaluation is conducted after the program’s completion or at the end of a program cycle. It generates data about how well the project delivered benefits to the target population. It is useful for program administrators to justify the …
When: At the end of a program; at the end of a program cycle.
What: How effectively the program made the desired change happen; how the program …
Why: Provides data to justify continuing the program; generates insights into the effectiveness and …
How: Conduct a review of internal reports and a survey for program managers.
Questions to ask: Should the program continue to be funded? Should the …
Goals-Based Evaluation (also known as ‘objectively set evaluation’)
Meaning/Purpose: Goals-based evaluation is usually done towards the end of the program or at previously agreed-upon intervals. Development programs often set ‘SMART’ targets – Specific, Measurable, Attainable, Relevant, and Timely – and goals-based evaluation measures progress towards these targets. The evaluation is useful in presenting reports to program administrators and backers, as it provides them the information that was agreed upon at the start of the program.
When: At the end of the program; at pre-decided milestones.
What: How the program has performed on initial metrics; whether the program has achieved its goals.
Why: To show that the program is meeting its initial benchmarks; to review the program and its progress.
How: This depends entirely on the goals that were agreed upon. Usually, goals-based evaluation would involve some survey of the participants to measure impact, as well as a review of input costs and efficiency.
Questions to ask: Has the program met its goals? Were the goals and objectives achieved due to the program or externalities?
Ex-post Evaluation / Sustained and Emerging Impacts Evaluation (SEIE)
Meaning/Purpose: Sustained and Emerging Impacts Evaluation (SEIE) refers to an evaluation that focuses on impacts some time after the end of an intervention (which might be a project, policy or group of projects or programs) or after the end of participants’ involvement in an ongoing intervention. It examines the extent to which intended impacts have been sustained as well as what unintended impacts have emerged over time (positive and negative). It is most commonly done for interventions with a finite timeframe, such as projects funded or implemented by international development agencies, multilateral organizations or philanthropic foundations, or local projects funded for a period of time by a national government.
When: The timing for an SEIE should depend on the expected change trajectory. It needs to be long enough afterwards to be able to see some change from initial impacts, but not so long that it is hard to collect data and not possible to use the information to inform decisions.
Why: SEIE provides the “missing piece” of evaluation in the project/programme cycle, as shown in the following diagram, developed …
Ex Ante Evaluation
Meaning/Purpose: Ex ante evaluation is a broad initial assessment aimed at identifying which alternative will yield the greatest benefit from an intended investment. More commonly, considerable resources are used on detailed planning of a single, specific …
When: Ex-ante evaluation is performed before implementation of a development intervention. (DAC definition)
Participatory Evaluation
Meaning/Purpose: Two approaches are particularly useful when framing an evaluation of community engagement programs; both engage stakeholders. In one, the emphasis is on the importance of participation; in the other, it is on empowerment. The first approach, participatory evaluation, actively engages the community in all stages of the evaluation process.
Participatory evaluation can help improve program performance by (1) involving key stakeholders in evaluation design and decision making, (2) acknowledging and addressing asymmetrical levels of power and voice among stakeholders, (3) using multiple and varied methods, (4) having an action component so that evaluation findings are useful to the program’s end users, and (5) explicitly aiming to build the evaluation capacity of stakeholders.
Characteristics of participatory evaluation:
• The focus is on participant ownership; the evaluation is oriented to the needs of the program stakeholders rather than the funding agency.
• Participants meet to communicate and negotiate to reach a consensus on evaluation results, solve problems, and make plans to improve the program.
• Input is sought and recognized from all participants.
• The emphasis is on identifying lessons learned to help improve program implementation and determine whether targets were met.
• The evaluation design is flexible and determined (to the extent possible) during the group processes.
• The evaluation is based on empirical data to determine what happened and why.
• Stakeholders may conduct the evaluation with an outside expert serving as a facilitator.
Potential disadvantages of participatory and empowerment evaluation include (1) the possibility that the evaluation will be viewed as less objective because of stakeholder involvement, (2) difficulties in addressing highly technical aspects, (3) the need for time and resources when involving an array of stakeholders, and (4) domination and misuse by some stakeholders to further their own interests. However, the benefits of fully engaging stakeholders throughout the evaluation outweigh these concerns (Fetterman et al., 1996).
Empowerment Evaluation
Meaning/Purpose: The second approach, empowerment evaluation, helps to equip program personnel with the necessary skills to conduct their own evaluation and ensure that the program runs effectively. This section describes the purposes and characteristics of the two approaches.
Empowerment evaluation is a stakeholder involvement approach designed to provide groups with the tools and knowledge they need to monitor and evaluate their own performance and accomplish their goals. Empowerment evaluation focuses on fostering self-determination and sustainability. It is particularly suited to the evaluation of comprehensive community-based initiatives or place-based initiatives.
A number of theories guide empowerment evaluation:
• Empowerment theory focuses on gaining control of resources in one’s environment. It also provides a guide for the role of the empowerment evaluator.
• Self-determination theory highlights specific mechanisms or behaviors that enable the actualization of empowerment.
• Process use cultivates ownership by placing the approach in community and staff members’ hands.
• Theories of use and action explain how empowerment evaluation helps people “walk their talk” and produce desired results.
Characteristics of empowerment evaluation:
• Values improvement in people, programs, and organizations to help them achieve results.
• Community ownership of the design and conduct of the evaluation and implementation of the findings.
• Inclusion of appropriate participants from all levels of the program, funders, and community.
• Democratic participation and clear and open evaluation plans and methods.
• Commitment to social justice and a fair allocation of resources, opportunities, obligations, and bargaining power.
• Use of community knowledge to understand the local context and to interpret results.
• Use of evidence-based strategies with adaptations to the local environment and culture.
• Building the capacity of program staff and participants to improve their ability to conduct their own evaluations.
• Organizational learning, ensuring that programs are responsive to changes and challenges.
• Accountability to funders’ expectations.
Meta Evaluation
Meaning/Purpose: A meta-evaluation is an instrument used to aggregate findings from a series of evaluations. It also involves an evaluation of the quality of this series of evaluations and its adherence to established good practice in evaluation. A meta-analysis provides more robust results that can help psychology researchers better understand the magnitude of an effect. A meta-analysis provides important conclusions and trends that can influence future research, policy-makers’ decisions, and how patients receive care. A meta evaluation is a systematic and formal evaluation of evaluations. It examines the methods used within an evaluation or set of evaluations to bolster the credibility of findings. This type is often used in policy-making settings. It is carried out external to the project implementation cycle.
Other types of evaluations:
• Cluster Evaluation: An evaluation of a set of related activities, projects and/or programs.
• External Evaluation: The evaluation of a development intervention conducted by entities and/or individuals outside the donor or implementing organizations. (OECD)
• Independent Evaluation: An evaluation carried out by entities or persons free of the control of those responsible for the design and implementation of the project. Note: the credibility of an evaluation depends in part on how independently it has been carried out. Independence implies freedom from political influence and organizational pressure. It is characterized by full access to information and by full autonomy in carrying out investigations and reporting findings. (OECD)
• Internal Evaluation
• Joint Evaluation
• Mid-term Evaluation
• Program Evaluation
• Project Evaluation
• Sector Program Evaluation
• Self-Evaluation
• Evaluability assessments (carried out before an evaluation begins) can be useful in setting realistic expectations for what information the evaluation can provide, what evidence can be gathered, and how the evaluation will answer questions.
Focused Evaluation (e.g. KAP survey)
Utilization-Focused Evaluation
Utilization-Focused Evaluation (UFE), developed by Michael Quinn Patton, is an approach based on the principle that an evaluation should be judged on its usefulness to its intended users. Therefore evaluations should be planned and conducted in ways that enhance the likely utilization of both the findings and of the process itself to inform decisions and improve performance.
UFE has two essential elements. Firstly, the primary intended users of the evaluation must be clearly identified and personally engaged at the beginning of the evaluation process to ensure that their primary intended uses can be identified. Secondly, evaluators must ensure that these intended uses of the evaluation by the primary intended users guide all other decisions that are made about the evaluation process.
Rather than a focus on general and abstract users and uses, UFE is focused on real and specific users and uses.
Principles-Focused Evaluation
Meaning/Purpose: In Principles-Focused Evaluation (PFE), the principles that guide an initiative are evaluated. This approach was created by evaluator Michael Quinn Patton for initiatives guided primarily by principles. PFE is an approach to evaluation, not a specific set of steps. It can look very different depending on the initiative, the principles, the context, and the people conducting the evaluation.
Principles are statements of how human beings should act that apply in a wide variety of situations. There are two main types of principles: moral principles and effectiveness principles.
Moral principles tell human beings how they should act for moral reasons. For example, you may hold the principle, “Intervene when a human rights violation is occurring” because you believe that upholding a person’s human rights is the moral thing to do.
Effectiveness principles tell human beings how they should act in order to be effective at achieving a certain outcome. For example, you may hold the principle, “Intervene when a human rights violation is occurring” because you believe that this is the most effective way to protect a person’s human rights.
For a Principles-Focused Evaluation, principles need to include statements of how to behave. The six principles of Human Rights (Universality, Indivisibility, Participation, Accountability, Transparency and Non-Discrimination) can be used as both moral and effectiveness principles. They can be evaluated in a Principles-Focused Evaluation if they are made into clear statements of how to behave. Please see Appendix A for examples of these statements for Human Rights Principles.
Principles-focused evaluation informs choices about which principles are appropriate for what purposes in which contexts, helping to navigate the treacherous terrain of conflicting guidance and competing advice. What principles work for what situations with what results is an evaluation question. Thus, from an evaluation perspective, principles are hypotheses, not truths. They may or may not work. They may or may not be followed. They may or may not lead to desired outcomes. Whether they work, whether they are followed, and whether they yield desired outcomes are subject to evaluation. Learning to evaluate principles, and applying what you learn from doing so, takes on increasing importance in an ever more complex world where our effectiveness depends on adapting to …
Guide: The GUIDE framework describes a well-defined principle as Guiding (providing guidance), Useful, Inspiring, Developmental (supportive of ongoing learning, growth, and adaptation), and Evaluable (able to be evaluated). You can use this framework as a tool for defining principles. For example, if we state a campaign principle of non-discrimination, “Work to reduce inequities in power and resources”, you could use the GUIDE criteria to test it. You could do this by asking members of your campaign: “Is this principle providing our campaign guidance? Is it useful? Is it inspiring? Does it support us in learning, growing and adapting? Is the principle clear enough that we can evaluate it?”
Developmental Evaluation (also called development evaluation)
Meaning/Purpose: Developmental Evaluation (DE) is an evaluation approach that can assist social innovators to develop social change initiatives in complex or uncertain environments. DE originators liken their approach to the role of research & development in the private sector product development process because it facilitates real-time, or close to real-time, feedback to program staff, thus facilitating a continuous development loop.
Michael Quinn Patton is careful to describe this approach as one choice that is responsive to context. This approach is not intended as the solution to every situation.
Developmental evaluation is particularly suited to innovation, radical program re-design, replication, complex issues and crises. In these situations, DE can help by framing concepts, testing quick iterations, tracking developments and surfacing issues.
This description is from Patton (2010), Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use: “Developmental Evaluation supports innovation development to guide adaptation to emergent and dynamic realities in complex environments. Innovations can take the form of new projects, programs, products, organizational changes, policy reforms, and system interventions. A complex system is characterized by a large number of interacting and interdependent elements in which there is no central control. Patterns of change emerge from rapid, real time interactions that generate learning, evolution, and development – if one is paying attention and knows how to observe and capture the important and emergent patterns. Complex environments for social interventions and innovations are those in which what to do to solve problems is uncertain and key stakeholders are in conflict about how to proceed.”
Developmental evaluation helps an organisation to generate rapid learning to support the direction of the development of a program, and/or affirm the need for a change of course. DE provides real-time feedback so that the program stakeholders can implement new measures and actions as goals emerge and evolve.
A basic monitoring system consists of (see the sketch after this list):
• Identified objects and indicators to be examined, pertaining to inputs
(including expenditures), outputs, outcomes and impact (what);
• Methods/means of verification and frequency of data collection
concerning the indicators (how, when, by whom?);
• Processing and analysis of the data; and
• Defining corrective actions.
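As an illustration only, these elements can be held in a simple record per indicator. The Python sketch below is hypothetical: the MonitoringItem class, its field names and the 10% tolerance rule are assumptions made for the example, not part of any standard M&E toolkit.

```python
# Minimal sketch of one entry in a basic monitoring system (illustrative only).
from dataclasses import dataclass

@dataclass
class MonitoringItem:
    indicator: str               # what is examined (input, output, outcome or impact level)
    means_of_verification: str   # how the data are collected
    frequency: str               # how often data are collected
    responsible: str             # who collects and analyses the data
    target: float                # planned value for the reporting period
    actual: float                # observed value for the reporting period

    def needs_corrective_action(self, tolerance: float = 0.10) -> bool:
        """Flag the item when actual performance falls more than `tolerance`
        (10% by default) below target, prompting managers to define corrective actions."""
        if self.target == 0:
            return self.actual < 0
        return (self.target - self.actual) / self.target > tolerance

item = MonitoringItem(
    indicator="Number of students completing the summer reading camp",
    means_of_verification="Camp attendance records",
    frequency="End of every camp",
    responsible="Teachers",
    target=500,
    actual=430,
)
print(item.needs_corrective_action())  # True: 14% below target, so review and act
```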
M&E plan
• An M&E plan describes how the whole M&E system for the
program works, including things like who is responsible for it, what
forms and tools will be used, how the data will flow through the
organisation, and who will make decisions using the data.
• In other organisations the whole M&E plan is called an M&E
framework (as if things weren’t confusing enough!).
http://www.tools4dev.org/resources/monitoring-evaluation-plan-
template/
What is an M&E system?
• As with many things in international development, the precise
definition of an M&E system varies between different organisations.
• In most cases an M&E system refers to all the indicators, tools and
processes that you will use to measure if a program has been
implemented according to the plan (monitoring) and is having the
desired result (evaluation).
• An M&E system is often described in a document called an M&E plan.
An M&E framework is one part of that plan.
Monitoring and evaluation (M&E) framework
• There is no standard definition of a Monitoring and Evaluation (M&E)
framework, or of how it differs from an M&E plan.
• For many organisations, an M&E framework is a table that describes the
indicators that are used to measure whether the program is a success.
• The M&E framework then becomes one part of the M&E plan, which
describes how the whole M&E system for the program works, including
things like who is responsible for it, what forms and tools will be used, how
the data will flow through the organisation, and who will make decisions
using the data. In other organisations the whole M&E plan is called an M&E
framework (as if things weren’t confusing enough!).
Note: An M&E framework can also be called an evaluation matrix.
(M&E) Framework Example
Columns: INDICATOR (definition: how is it calculated?), BASELINE (what is the current value?), TARGET (what is the target value?), DATA SOURCE (how will it be measured?), FREQUENCY (how often will it be measured?), RESPONSIBLE (who will measure it?), REPORTING (where will it be reported?).
Goal: Percentage of Grade 6 primary students continuing on to high school.
• Definition: Number of students who start the first day of Grade 7 divided by the total number of Grade 6 students in the previous year, multiplied by 100.
• Baseline: 50%. Target: 60%.
• Data source: Primary and high school enrolment records. Frequency: Annual. Responsible: Program manager. Reporting: Annual enrolment report.
Outcome: Reading proficiency among children in Grade 6.
• Definition: Sum of all reading proficiency test scores for all students in Grade 6 divided by the total number of students in Grade 6.
• Baseline: Average score 47. Target: Average score 57.
• Data source: Reading proficiency tests using the national assessment tool. Frequency: Every 6 months. Responsible: Teachers. Reporting: 6-monthly teacher reports.
Output: Number of students who completed a summer reading camp.
• Definition: Total number of students who were present on both the first and last day of the summer reading camp.
• Baseline: 0. Target: 500.
• Data source: Summer camp attendance records. Frequency: End of every camp. Responsible: Teachers. Reporting: Camp review report.
Output: Number of parents of children in Grade 6 who helped their children read at home in the last week.
• Definition: Total number of parents who answered “yes” to the question “Did you help your child read at home any time in the last week?”
• Baseline: 0. Target: 500.
• Data source: Survey of parents. Frequency: End of every camp. Responsible: Program officer. Reporting: Survey report.
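To make the “how is it calculated?” column concrete, the short Python sketch below computes the goal and outcome indicators defined above. The function names and input numbers are invented for illustration; only the formulas come from the table.

```python
# Illustrative calculation of two indicators from the example framework.
# The input numbers are made up; only the formulas follow the table above.

def transition_rate(grade7_starters: int, grade6_last_year: int) -> float:
    """Goal indicator: % of Grade 6 students continuing on to high school."""
    return grade7_starters / grade6_last_year * 100

def average_reading_score(scores: list[float]) -> float:
    """Outcome indicator: mean reading proficiency score for Grade 6 students."""
    return sum(scores) / len(scores)

print(transition_rate(grade7_starters=330, grade6_last_year=600))  # 55.0 (baseline 50%, target 60%)
print(average_reading_score([42, 55, 61, 48, 50]))                 # 51.2 (baseline 47, target 57)
```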
COMPARISONS BETWEEN TERMINOLOGIES OF DIFFERENT DONOR AGENCIES for RESULTS / LOGICAL FRAMEWORKS
Compiled by Jim Rugh for CARE International and InterAction’s Evaluation Interest Group
Column headings: Ultimate Impact / End Outcomes / Intermediate Outcomes / Outputs / Interventions
• Needs-based: Higher Consequence, Specific Problem, Cause, Solution, Process, Inputs
• CARE terminology: Program Impact, Project Impact, Effects, Outputs, Activities, Inputs
• CARE logframe: Program Goal, Project Final Goal, Intermediate Objectives, Outputs, Activities, Inputs
• PC/LogFrame: Goal, Purpose, Outputs, Activities
• USAID Results Framework: Strategic Objective, Intermediate Results, Outputs, Activities, Inputs
• USAID Logframe: Final Goal, Strategic Goal/Objective, Intermediate Results, Activities
• DANIDA + DfID: Goal, Purpose, Outputs, Activities
• CIDA + GTZ: Overall Goal, Project Purpose, Results/Outputs, Activities, Inputs
• European Union: Overall Objective, Project Purpose, Results, Activities
• FAO + UNDP + NORAD: Development Objective, Immediate Objectives, Outputs, Activities, Inputs
• UNHCR: Sector Objective, Goal, Project Objective, Outputs, Activities, Input/Resources
• World Bank: Long-term Objectives, Short-term Objectives, Outputs, Inputs
• AusAID: Scheme Goal, Major Development Objectives, Outputs, Activities, Inputs
Part of monitoring procedures is the Project Review.
• Project review is a specific formal examination of Project implementation
towards attaining its Objective, as part of the Project monitoring Activities.
• Real Time Reviews
• Real Time Reviews are often considered a type of evaluation and have
the same objectives, i.e. to see if we are achieving our goals.
Real Time Reviews are conducted while the response / programmes are
still ongoing, and they aim to inform the next phase of the response /
programmes as well as future responses / programmes. Often this is done
after the first phase of the response / programme, between 3 and 6 months in.
Theory of Change
• Theory of change’ is an outcomes-based approach which applies critical thinking to the design,
implementation and evaluation of initiatives and programmes intended to support change in their
contexts.
• The description of a sequence of events that is expected to lead to a particular desired outcome
• ‘Theory of change is an on-going process of reflection to explore change and how it happens - and
what that means for the part we play in a particular context, sector and/or group of people.
• ‘Theory of change is a dynamic, critical thinking process; it makes the initiative clear and
transparent and it underpins strategic planning. It is developed in a participatory way over time,
following a logical structure that is rigorous and specific, and that can meet a quality test by the
stakeholder. The terminology is not important, it is about buying into the critical thinking.’ – Helene
Clark, ActKnowledge
Theory of Change vs Logical Framework
• In practice, a Theory of Change typically:
• Gives the big picture, including issues related to the
environment or context that you can’t control.
• Shows all the different pathways that might lead to
change, even if those pathways are not related to
your program.
• Describes how and why you think change happens.
• Could be used to complete the sentence “if we do
X then Y will change because…”.
• Is presented as a diagram with narrative text.
• The diagram is flexible and doesn’t have a
particular format – it could include cyclical
processes, feedback loops, one box could lead to
multiple other boxes, different shapes could be
used, etc.
• Describes why you think one box will lead to
another box (e.g. if you think increased knowledge
will lead to behaviour change, is that an
assumption or do you have evidence to show it is
the case?).
• Is mainly used as a tool for program design and
evaluation.
• In practice, a Logical Framework:
• Gives a detailed description of the program
showing how the program activities will lead
to the immediate outputs, and how these will
lead to the outcomes and goal (the
terminology used varies by organisation).
• Could be used to complete the sentence “we
plan to do X which will give Y result”.
• Is normally shown as a matrix, called a
logframe. It can also be shown as a flow chart,
which is sometimes called a logic model.
• Is linear, which means that all activities lead to
outputs which lead to outcomes and the goal
– there are no cyclical processes or feedback
loops.
• Includes space for risks and assumptions,
although these are usually only basic. Doesn’t
include evidence for why you think one thing
will lead to another.
• Is mainly used as a tool for monitoring.
Indicators Defined
• An indicator provides evidence that a certain condition exists or
certain results have or have not been achieved (Brizius & Campbell,
p.A-15).
• Indicators enable decision-makers to assess progress towards the
achievement of intended outputs, outcomes, goals, and objectives. As
such, indicators are an integral part of a results-based accountability
system.
• Signals that show whether the standard has been attained
• Tools to measure and communicate the impact or result
• May be qualitative or quantitative
Management
Management is the planning, organizing, leading, and controlling of resources to achieve
organizational goals effectively and efficiently.
• Planning is choosing appropriate organizational goals and the correct directions to
achieve those goals.
• Organizing involves determining the tasks and the relationships that allow employees to
work together to achieve the planned goals.
• With leading, managers motivate and coordinate employees to work together to achieve
organizational goals.
• When controlling, managers monitor and measure the degree to which the organization
has reached its goals.
Management is the process of efficiently and effectively acquiring, developing, protecting,
and utilizing organizational resources in the pursuit of organizational goals.
Social Cohesion
• The OECD [1] defines social cohesion as:
A cohesive society works towards the well-being of all its members, fights exclusion and marginalisation, creates a
sense of belonging, promotes trust, and offers its members the opportunity of upward mobility.
While the notion of ‘social cohesion’ is often used with different meanings, its constituent elements include
concerns about social inclusion, social capital and social mobility. Some of these elements can be quantified, and
some countries have taken steps to develop suitable metrics in this field, e.g. through specific surveys assessing
different aspects of people’s social connections and civic engagement.
However, most researchers define cohesion to be task commitment and interpersonal attraction to the group.[3][4]
Cohesion can be more specifically defined as the tendency for a group to be in unity while working towards a goal or
to satisfy the emotional needs of its members.[4]
• The EU Council defines it as: the capacity of a society to ensure the well-being of all its members – minimising disparities
and avoiding marginalisation – to manage differences and divisions and ensure the means of achieving welfare for
all members. It is a political concept.
• Social cohesion is a dynamic process and is essential for achieving social justice, democratic security and
sustainable development. Divided and unequal societies are not only unjust, they also cannot guarantee stability
in the long term. In a cohesive society the well-being of all is a shared goal that includes the aim of ensuring
adequate resources are available to combat inequalities and exclusion.
WDR’s framework [2] on social cohesion emphasises the way societies and groups manage
possible collective action problems arising from economic and social transformations in
and around national labor markets.
Planning
• Planning is a way to organize actions that will hopefully lead to the
fulfillment of a goal.
• A plan is a way to organize actions that will lead to the fulfillment of a
goal by providing direction and an approach to follow.
• How? By providing clear direction and an approach to follow--giving a
method to your madness, so to speak.
WHY SHOULD YOU DEVELOP A PLAN?
• To make your life easier, of course. But, more specifically:
• To help you map out how to get from point A to point B
• To make sure that you work more efficiently and effectively. A plan is
important because it focuses on the set of steps you will need to go
through to achieve your ultimate goal of recruiting members. The
planning stage is the time to decide what actions the organization will
take to achieve its goal.
Development
• The cumulative and lasting increase, tied to social changes in the
quantity and quality of a community’s goods, services and resources,
with the purpose of maintaining and improving the quality and
security of human life.
• But to define development as an improvement in people's well-being
does not do justice to what the term means to most of us.
Development also carries a connotation of lasting change.
Sustainable Development
• Development that meets the needs of the present without
compromising the ability of future generations to meet their own
needs.
• It contains within it two key concepts: the concept of "needs", in
particular the essential needs of the world's poor, to which
overriding priority should be given; and the idea of limitations
imposed by the state of technology and social organization on the
environment's ability to meet present and future needs.
(Brundtland Commission, 1987).
An Overview of Strategic Planning
or "VMOSA"
What is VMOSA?
• Vision
• Mission
• Objectives
• Strategies
• Action Plans
What is VMOSA?
• A practical strategic planning tool.
• A blueprint for moving from dreams to actions to
outcomes.
• An ongoing process.
Why use VMOSA?
• To give your organization structure and direction.
• To help build consensus about what to do and how
to do it.
• To focus your organization's efforts.
When to use VMOSA
• New organization.
• New initiative or large project.
• New phase of ongoing effort.
• Breathe life into an older initiative.
Vision Statements: The dream
• Examples:
• "Healthy adolescents"
• "Healthy babies"
• "Caring parents"
• "A community of hope"
• "Safe sex"
• "Teen power"
• "Caring relationships"
Mission: The what and why
Examples:
• To build a healthy community through a comprehensive
initiative to promote jobs, education, and housing.
• To promote adolescent health and development through
school and community support and prevention.
Objectives: The how much of what
will be accomplished by when
Examples:
• By 2005, increase by 40% the number of adults
who report caring activities with a child not their
own.
• By 2015, decrease by 25% the number of reported
cases of child abuse and neglect.
Strategies: The how
• Examples:
• Enhance experience and competence.
• Remove barriers.
• Increase support and resources.
• Make outcomes matter.
Action Plan: The specifics of who will
do what, by when, at what costs
• These consist of:
• Action steps (what will be done).
• People responsible (by whom).
• Date completed (by when).
• Resources required (costs).
• Collaborators (who should know).
The Sphere Project
• A process that began in 1997 to address concerns of quality and
accountability in humanitarian responses
• Humanitarian Charter that emphasizes the “right to life with dignity”
• Minimum Standards in Disaster Response:
– Water, sanitation and hygiene promotion
– Food security, nutrition and food aid
– Shelter, settlement and non-food items
– Health services
www.sphereproject.org
What is a natural hazard vs a disaster?
• A natural hazard is a natural phenomenon that can potentially trigger
a disaster
• Examples include earthquakes, mud-slides, floods, volcanic eruptions,
tsunamis and drought
• These physical events need not necessarily result in disaster
• A disaster is a serious disruption of the functioning of a community
or a society involving widespread human, material, economic or
environmental losses and impacts, exceeding the ability of the
community to cope using its own resources
What is risk?
Risk is the product of hazards over which we have no control. It combines:
• the likelihood or probability of a disaster happening
• the negative effects that result if the disaster happens
– these are increased by vulnerabilities (characteristics/circumstances
that make one susceptible to the damaging effects of a hazard)
– and decreased by capacities (a combination of strengths, attitudes
and resources)
Acceptable Risk
The level of loss a society or community considers it can live with and
for which it does not need to invest in mitigation.
Risk assessment / Analysis
A methodology to determine the nature and extent of risk by analyzing
potential hazards and evaluating existing vulnerability that could pose a
potential threat to people, property, livelihoods and the environment.
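The combination described above (likelihood and negative effects, increased by vulnerabilities and decreased by capacities) is often summarised in DRR training materials with the shorthand Risk = Hazard x Vulnerability / Capacity. The Python sketch below only illustrates that simplified formulation; the 0-10 scoring scale and the function itself are assumptions made for the example, not a standard method.

```python
# Simplified sketch: risk rises with hazard likelihood and vulnerability,
# and falls with capacity. Scores are on an arbitrary 0-10 assessor scale.

def disaster_risk(hazard_likelihood: float, vulnerability: float, capacity: float) -> float:
    if capacity <= 0:
        raise ValueError("capacity must be positive")
    return hazard_likelihood * vulnerability / capacity

# Same flood hazard, two communities: lower vulnerability and stronger
# capacities reduce the resulting risk score.
print(disaster_risk(hazard_likelihood=6, vulnerability=8, capacity=2))  # 24.0
print(disaster_risk(hazard_likelihood=6, vulnerability=4, capacity=8))  # 3.0
```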
Emergency Management
The management and deployment of resources for dealing with
all aspects of emergencies, in particular preparedness, response and
rehabilitation.
Disaster Risk Management (DRM)
The comprehensive approach to reduce the adverse impacts of a disaster. It
encompasses all actions taken before, during, and after the disasters. It
includes activities on mitigation, preparedness, emergency response, recovery,
rehabilitation, and reconstruction.
Disaster Risk Reduction (disaster reduction)
The measures aimed to minimize vulnerabilities and disaster risks throughout a
society, to avoid (prevention) or to limit (mitigation and preparedness) the
adverse impacts of hazards, within the broad
context of sustainable development.
Prevention: outright avoidance of the adverse effects of hazards / disasters, OR
activities to ensure complete avoidance of the adverse impact of hazards.
Mitigation: the process of lessening or limiting the adverse effects of hazards /
disasters, OR structural and non-structural measures undertaken to limit the
adverse impact of natural hazards, environmental degradation and
technological hazards.
Preparedness: knowledge and capacities to effectively anticipate, respond to
and recover from the impacts of likely hazards. Activities and measures taken in
advance to ensure effective response to the impact of hazards, including the
issuance of timely and effective early warnings and the temporary
evacuation of people and property from threatened locations.
Risk Reduction: practice of reducing risks through systematic efforts to analyze
and manage the causal factors of disasters, including through reduced
exposure, lessened vulnerability, improved preparedness
Response: provision of emergency services to save lives, meet needs
Terminology
Natural Hazards
Natural processes or phenomena occurring on the earth that may constitute a
damaging event. Natural hazards can be classified by origin namely: geological,
hydro meteorological or biological. Hazardous events can vary in magnitude or
intensity, frequency, duration, area of extent, speed of onset, spatial dispersion and
temporal spacing
Hazard Analysis
Identification, studies and monitoring of any hazard to determine its potential,
origin, characteristics and behaviour.
Early Warning
The provision of timely and effective information, through identified institutions to
communities and individuals so that they can take action to reduce their risk
and prepare for effective response.
Climate Change
The climate of a place or region is changed if over an extended period (typically decades or longer) there is a
statistically significant change in measurements of either the
mean state or variability of the climate for that region.
Hazard
A potentially damaging physical event or phenomenon that may cause the loss of life or injury, property damage,
social and economic disruption or environmental degradation. Hazards can include natural (geological, hydro
meteorological and biological) or induced by human processes (environmental degradation and technological
hazards). Hazards can be single, sequential or combined in their origin and effects. Each hazard is characterized by
its location, intensity, frequency and probability.
In general, less developed countries are more vulnerable to natural hazards than are industrialized
countries because of lack of understanding, education, infrastructure, building codes, etc. Poverty also
plays a role - since poverty leads to poor building structure, increased population density, and lack of
communication and infrastructure
Vulnerability to Hazards and Disasters
Vulnerability refers to the way a hazard or disaster will affect human life and property.
Vulnerability to a given hazard depends on:
Proximity to a possible hazardous event
Population density in the area proximal to the event
Scientific understanding of the hazard
Public education and awareness of the hazard
Existence or non-existence of early-warning systems and lines of communication
Availability and readiness of
emergency infrastructure
Construction styles and building codes
Cultural factors that influence public response to warnings
Risk and vulnerability can sometimes be reduced if there is an adequate means of predicting a
hazardous event. E.g. prediction, early warning system, forecasting etc.
Development and habitation of lands susceptible to hazards, For example, building on
floodplains subject to floods, sea cliffs subject to landslides, coastlines subject to
hurricanes and floods, or volcanic slopes subject to volcanic eruptions.
Increasing the severity or frequency of a natural disaster. For example: overgrazing or
deforestation leading to more severe erosion (floods, landslides), mining groundwater
leading to subsidence, construction of roads on unstable slopes leading to landslides, or
even contributing to global warming, leading to more severe storms.
Affluence can also play a role, since affluence often controls where habitation takes place, for
example along coastlines, or on
volcanic slopes. Affluence also likely contributes to global warming, since it is the affluent societies that burn
the most fossil fuels adding CO2 to the atmosphere.
Assessing Hazards and Risk
Hazard Assessment and Risk Assessment are 2 different concepts!
Hazard Assessment consists of determining the following
when and where hazardous processes have occurred in the past.
the severity of the physical effects of past hazardous processes (magnitude).
the frequency of occurrence of hazardous processes.
the likely effects of a process of a given magnitude if it were to occur now.
and, making all this information available in a form useful to planners and public
officials responsible for making decisions in event of a disaster.
Risk Assessment
Risk assessment involves not only the assessment of hazards from a scientific point of view, but also the socio-economic impacts of
a hazardous event. Risk is a statement of the probability that an event will cause x amount of damage, or a statement
of the economic impact in monetary terms that an event will cause. Risk assessment involves hazard assessment,
as above; the location of buildings, highways, and other infrastructure in the areas subject to hazards; potential
exposure to the physical effects of a hazardous situation; and the vulnerability of the community when
subjected to the physical effects of the event.
Risk assessment aids decision makers and scientists in comparing and evaluating potential hazards, setting priorities on
what kinds of mitigation are possible, and setting priorities on where to focus
resources and further study.
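As a purely illustrative sketch of “a statement of probability that an event will cause x amount of damage”, risk can be expressed as an expected loss: probability times damage, summed over the scenarios considered. The scenario names, probabilities and loss figures below are invented for the example.

```python
# Expected annual loss as a simple monetary expression of risk.
# All scenario probabilities and damage figures are hypothetical.

flood_scenarios = [
    {"name": "minor flood",  "annual_probability": 0.20,  "damage_usd": 50_000},
    {"name": "major flood",  "annual_probability": 0.02,  "damage_usd": 2_000_000},
    {"name": "severe flood", "annual_probability": 0.005, "damage_usd": 10_000_000},
]

# Sum of probability x damage over the scenarios considered.
expected_annual_loss = sum(s["annual_probability"] * s["damage_usd"] for s in flood_scenarios)
print(round(expected_annual_loss))  # 100000 USD/year = 10,000 + 40,000 + 50,000
```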
Poverty and deprivation: life circumstances can include poor housing and homelessness.
Community infrastructure
Community infrastructure primarily refers to small-scale basic structures, technical facilities and systems
built at the community level that are critical for the sustenance of lives and livelihoods of the population living
in a community.
Community infrastructure is an integral sub-sector of the infrastructure sector.
These are low-cost, small-scale infrastructures built over time through community-led initiatives according
to the needs and aspirations of the community population.
These micro infrastructures are socially, economically and operationally linked with community lives and livelihood
options, and ensure basic services to the population; they are thus conceived as critical lifelines for the survival of the
community.
However, drawing a line between main infrastructure and community infrastructure is not easy, and a globally
accepted definition of community infrastructure does not yet exist.
Behavior change communication (BCC)
Behavior change communication (BCC) is an interactive process of any intervention
with individuals, communities and/or societies (as integrated with an overall program)
to develop communication strategies to promote positive behaviors which are
appropriate to their settings.
The Conceptual Framework: How BCC Works
Differences between quantitative and qualitative research methods.
Definitions
• Evaluation: the systematic and objective assessment of an ongoing or
completed project, program or policy, focusing on design,
implementation and results.
• Also defined as the systematic application of social research
procedures for assessing the conceptualisation, design,
implementation and utility of programs.
• Research: systematic investigation designed to develop or contribute
to generalised knowledge; includes developing, testing and evaluating
hypotheses.
• Evaluation research: a type of applied research in which one tries to
determine how well a program or policy is working or reaching its
goals and objectives.
To understand the concept of Program Evaluation, it is essential to know its
difference from Research, as these two concepts are most often confused with
each other. A common question that is raised is: are research and evaluation the same or
different? Unfortunately, there is no easy answer to this. There has
been alternative theorisation among experts in establishing the distinction
between research and evaluation. There are various schools of thought or
typologies that exist which view research and evaluation from different lenses.
The four different typologies are as follows –
Evaluation as a sub-section of research
This school of thought is premised on the notion that – Doing research does not
necessarily require doing evaluation. However, doing evaluation always requires
doing research.
Research as a sub-section of Evaluation
The second school of thought on the distinction between research and evaluation
is that research is only a sub-section of evaluation. According to this notion,
the research part of evaluation involves only collecting and analysing empirical
data.
Research and Evaluation not mutually exclusive
The third school of thought views research and evaluation as overlapping rather than mutually exclusive: an activity can be both research and evaluation – or neither. Research is about being empirical; evaluation is about drawing evaluative conclusions about quality, merit or worth.
Research and evaluation are dichotomous (of two distinct kinds)
Another school of thought considers research and evaluation as two completely separate streams of producing knowledge. Evaluation is viewed as more interested in specific, applied knowledge, and as more controlled by those funding or commissioning the evaluation. Research, on the other hand, is considered as interested in producing generalisable knowledge, which is theoretical and controlled by the researchers.
Distinguishing features: Research vs. Evaluation
Agenda: Research generally involves greater control (though it is often constrained by funding providers); researchers create and construct the field. Evaluation works within a given brief or set of “givens” – e.g. programme, field, participants, terms of reference, agenda and variables.
Audiences: Research is disseminated widely and publicly. Evaluations are often commissioned and become the property of the sponsors; they are not for the public domain.
Data sources and types: Research draws on a more focused body of evidence. Evaluation has a wide field of coverage (e.g. costs, benefits, feasibility, justifiability, needs, value for money), so it tends to employ a wider and more eclectic range of evidence from an array of disciplines and sources than research.
Decision making: Research is used for macro decision making; evaluation is used for micro decision making.
Focus: Research is concerned with how something works; evaluation is concerned with how well something works.
Origins: Research originates from scholars working in a field; evaluation is issued from/by stakeholders.
Outcome focus: Research may not prescribe or know its intended outcomes in advance; evaluation is concerned with the achievement of intended outcomes.
Ownership of data: In research, intellectual property is held by the researcher; in evaluation, ownership is ceded to the sponsor upon completion.
Participants: Research has less (or no) focus on stakeholders; evaluation focuses almost exclusively on stakeholders.
Politics of the situation: Research provides information for others to use; evaluation may be unable to stand outside the politics of the purposes and uses of (or participants in) an evaluation.
Purposes: Research contributes to knowledge in the field regardless of its practical application and provides empirical information – i.e. “what is”; it is conducted to gain, expand and extend knowledge, to generate theory, to “discover” and to predict what will happen. Evaluation is designed to use information and facts to judge the worth, merit, value, efficacy, impact and effectiveness of something – i.e. “what is valuable”; it is conducted to assess performance, provide feedback, inform policy making and “uncover”, and its concern is with what has happened or is happening.
Relevance: Research can have wide boundaries (e.g. generalising to a wider community) and can be prompted by interest rather than relevance. For evaluation, relevance to the programme or to what is being evaluated is a prime feature, and it has to take particular account of timeliness and particularity.
Reporting: Research may report to stakeholders/commissioners but may also report more widely (e.g. in publications). Evaluation reports to stakeholders and commissioners.
Scope: Research often (though not always) seeks to generalise (external validity) and may not include evaluation. Evaluation is concerned with the particular – e.g. a focus only on specific programmes – seeks to ensure internal validity, and often has a more limited scope.
Stance: Research is active and proactive; evaluation is reactive.
Standards for judging quality: In research, judgements are made by peers against standards such as validity, reliability, accuracy, causality, generalisability and rigour. In evaluation, judgements are made by stakeholders against standards that also include utility, feasibility, involvement of stakeholders, side effects, efficacy and fitness for purpose.
Status: Research is an end in itself; evaluation is a means to an end.
Time frames: Research is often ongoing and less time bound (although this is not the case with funded research); evaluation begins at the start of a project and finishes at its end.
Use of results: Research is designed to demonstrate or prove; it provides the basis for drawing conclusions and information on which others might or might not act – i.e. it does not prescribe. Evaluation is designed to improve; it provides the basis for decision making and might be used to increase or withhold resources or to change practice.
• Applied Research
Applied Research refers to scientific research that seeks to solve
practical (programmatic) problem(s). It's used to find solutions, cure
illness, develop innovations and new strategies. We can use it to design
new programmes.
Survey Methodologies
• “Perhaps the most important part of the survey process is the creation of questions that accurately measure the opinions, experiences and behaviors of the
public. Accurate random sampling and high response rates will be wasted if the information gathered is built on a shaky foundation of ambiguous or biased
questions.” (1) There are three main types of survey methodologies, and each has its own risks and benefits.
• Open-ended Questions
Examples of open-ended questions:
• “What do you think are the reasons some adolescents in this area start using drugs?”
• “What would you do if you noticed that your daughter (school girl) had a relationship with a teacher?”
• Partially Categorized Questions
Example of a pre-categorized open-ended question:
“How did you become a member of the Village Health Committee?” (4)
Categorize the response into these options:
•Volunteered
•Elected at a community meeting
•Nominated by community leaders
•Nominated by the health staff
•Other ____
Closed Questions
• Did you eat any of the following foods yesterday? (6)
• Peas, beans, lentils (yes/no)
• Fish or meat (yes/no)
• Eggs (yes/no)
• Milk or cheese (yes/no)
• Insects (yes/no)
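A closed question like this maps directly onto a simple indicator. As a rough illustration (not from the source), the sketch below tallies one respondent's yes/no answers into a food-group count; the field names and the indicator itself are hypothetical.

```python
# Hypothetical example: turn yes/no answers to a closed dietary question
# into a simple food-group count for one respondent.
answers = {
    "peas_beans_lentils": "yes",
    "fish_or_meat": "no",
    "eggs": "yes",
    "milk_or_cheese": "yes",
    "insects": "no",
}

# Count the food groups eaten yesterday (illustrative indicator only).
food_group_count = sum(1 for value in answers.values() if value == "yes")
print(food_group_count)  # -> 3
```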
KAP Survey
• KAP surveys are focused evaluations that measure changes in human knowledge, attitudes and practices in response to a specific intervention. The KAP survey was first used in the fields of family planning and population studies in the 1950s. KAP studies use fewer resources and tend to be more cost effective than other social research methods because they are highly focused and limited in scope. KAP studies tell us what people know, how they feel and how they behave; each study is designed for a specific setting and issue. (11) “The attractiveness of KAP surveys is attributable to characteristics such as an easy design, quantifiable data… concise presentation of results, cultural comparability, speed of implementation, and the ease with which one can train enumerators.” (12) In addition, KAP studies bring to light the social, cultural and economic factors that may influence health. There is growing recognition within the international aid community that improving the health of poor people across the world depends upon adequate understanding of the socio-cultural and economic aspects of the context in which health programmes are implemented. Such information has typically been gathered through various types of cross-sectional surveys, the most popular and widely used being the knowledge, attitude, and practice (KAP) survey.
• KAP Research Protocols
• KAP Surveys and Public Health
The Shortcomings of KAP Surveys
• Data Can Be Hard to Interpret Accurately
• Lack of Standardized Approach to Validate Findings
• Analyst Biases in KAP Surveys
• Other Criticisms
• A main criticism of KAP surveys is that their findings generally lead to prescriptions for mass behavior modification instead of targeting interventions towards individuals. For example, a study which used KAP surveys to examine AIDS-related behaviour concluded that surveys of “diffuse behaviors in undifferentiated populations are not productive in low-seroprevalence populations, especially when the objective is to design interventions to avert further infection. The failure of KAP surveys [to produce accurate] behavioral data for individuals and for populations makes them fundamentally flawed for such purposes.” Other major problems with KAP surveys are that investigators use the surveys to explain health behavior as though there were a straightforward link between knowledge and action. (29) A study on malaria control in Vietnam found that though respondents had a surprisingly high level of knowledge and awareness regarding malaria, “the findings are of limited value because of the [lack of information on actual] preventive actions and health-seeking behaviors. Anecdotal evidence suggests there are deficiencies in these important practices, but the study design did not permit us to explore these.” (30) In addition, though KAP surveys can describe which treatments people report using, they fail to explain why and when certain treatment practices are chosen. In other words, they fail to explain the logic behind treatment-seeking practices.
• The Alternative
• KAP surveys can be useful when the research plan is to obtain general information about public health knowledge and sociological variables. However, “if the objective is to study health-seeking knowledge, [behaviour and practices in greater depth, a number of qualitative methods are] available, including focus group discussions, in-depth interviews, participant observation, and various participatory methods.” (32) The preferred use of qualitative surveys and research is corroborated by a study which, although its KAP survey generated useful findings, concluded that “an initial, qualitative investigation (eg. observation and focus group discussions) to explore the large numbers of potential influences on behavior and exposure risk would have provided [a firmer foundation for the survey, would] have strengthened its validity and generated additional information.” (33) A study conducted by Agyepong and Manderson (1999) also confirms this notion and argues “that truly qualitative methods, such as [observation and in-depth interviews,] are vital foundations for exploratory investigations at the community level, and should precede and underpin population-level approaches, such as KAP surveys.”
• Conclusion
• The survey is critical to designing public health interventions and assessing their impact. There are a variety of different methodologies that can be used when designing surveys: open-ended questions, partially categorized questions and closed questions. Each has its own benefits and drawbacks, though partially categorized questions are considered to yield the most accurate and reliable data. KAP surveys explore respondents’ knowledge, attitudes and practices around a specific issue and bring to light the characteristics, knowledge, attitudes and practices that may serve to explain health risks and behaviors. Though they are very useful for obtaining general information about sociological and cultural variables, qualitative investigation should precede and complement any KAP study or survey.
[1] CARE Impact Guidelines, October 1999.
[2] PC/LogFrame (tm) 1988-1992 TEAM technologies, Inc.
[3] Results Oriented Assistance Sourcebook, USAID, 1998.
[4] The Logical Framework Approach to portfolio Design, Review and Evaluation in A.I.D.: Genesis, Impact, Problems and Opportunities. CDIE, 1987.
[5] A Guide to Appraisal, Design, Monitoring , Management and Impact Assessment of Health & Population Projects, ODA [now DFID], October 1995
[6] Guide for the use of the Logical Framework Approach in the Management and Evaluation of CIDA’s International Projects. Evaluation Division.
[7] ZOPP in Steps. 1989.
[8] Project Cycle Management: Integrated Approach and Logical Framework, Commission of the European Communities Evaluation Unit Methods and Instruments for Project Cycle Management, No. 1, February 1993
[9] Project Appraisal and the Use of Project Document Formats for FAO Technical Cooperation Projects. Pre-Course Activity: Revision of Project Formulation and Assigned Reading. Staff Development Group, Personnel Division, August 19
[10] UNDP Policy and Program Manual
[11] The Logical Framework Approach (LFA). Handbook for Objectives-oriented Project Planning.
[12] Project Planning in UNHCR: A Practical Guide on the Use of Objectives, Outputs and Indicators for UNHCR Staff and Implementing Partners. Second Ver. March 2002.
[13] AusAID NGO Package of Information, 1998
COMPARISONS BETWEEN TERMINOLOGIES OF DIFFERENT DONOR AGENCIES for RESULTS / LOGICAL FRAMEWORKS
Compiled by Jim Rugh for CARE International and InterAction’s Evaluation Interest Group
This table has been referred to as “The Rosetta Stone of Logical Frameworks”.
CHARACTERISTICS OF A CASE STUDY
1. Intensive study
2. In-depth examination
3. Systematic way of collecting, analysing and reporting data
4. Understanding the why and the what
5. Generating and testing hypotheses
6. Involvement of stakeholders in identification of variables
7. Ratifies data/numbers
Qualitative Research Method (Who, What, When, Where, Why and How?)
It is an important method because:
1. It provides in-depth analysis for informed decision making
2. It inspires innovation and reflection
3. It validates hypotheses
4. It offers an integrated study of influencing factors, leading to holistic views
5. It is useful for building credibility and evidence
6. It helps in sharing knowledge with different target audiences
Success Story
As part of the world cafe exercise certain characteristics of a success story were discussed and agreed upon. Some of
them are:
•Short, clear, simple and readable
•Reflecting positive change and transformation.
•Highlighting project interventions.
•Empowering for communities
•Generates a buzz/attracts immediate attention
COMPARISONS BETWEEN SUCCESS STORY & CASE STUDY
Good Practices
Best practice: something that we have learned from experience on a number of similar projects around the world. This requires looking at a number of lessons learned from projects in the same field and noticing a trend that seems to be true for all projects in that field.
Key characteristics:
Replicable/adaptable
Sustainability
Proven process & methodology (within a geographical location)
Reflect the process
Community owned & tested procedures
Tested innovations
Reasons they are useful to generate:
Sustain over a period of time
Provide roadmap for scaling up
Saves time
Using knowledge for further replication
Sharing knowledge by providing options & choices
Lessons learned: key characteristics and the reasons for sharing lessons learned
A lesson learned is a short, simple description of something we have learned from experience on a specific project or program. It should be supported with evidence from our monitoring and evaluation. Lessons learned should be useful to other people implementing similar projects around the world.
Key characteristics
Lessons learned come from real experiences, feedback and other processes in the project cycle.
• These lessons learned are factual
• Lessons learned are generated within a time frame
• They are derived from reflection, analysis and other qualitative and quantitative data
• Lessons learned can be generated individually and through the involvement of other stakeholders.
Reasons
Lessons learned can be an important input in project implementation
• They help in improving outputs and outcomes.
• Lessons learned, used effectively, can save time, energy and resources.
• They can form a huge knowledge base for designing future activities and innovations.
• Some lessons can be used as advocacy material and can be useful for initiating policy dialogue.
Social mobilization should not be confused with social mobility – the movement of a group of people up or down the socio-economic scale over time (for example, winning the lottery while living in poverty would raise your socio-economic status to a wealthier and more comfortable position).
UNICEF defines social mobilization as a broad-scale movement to engage people’s participation in achieving a specific development goal through self-reliant effort.
Social mobilization is the process of getting people, especially the rural poor, organized in order to improve their own situation.
The philosophy is “helping people to help themselves”.
Social mobilisation is the process of bringing together all stakeholders to raise people’s awareness of and demand for a particular programme
(health etc.), to assist in the delivery of resources and services and to strengthen community participation for sustainability and self-reliance.
Social mobilisation recognizes that sustainable social and behavioural change requires many levels of involvement—from individual to
community to policy and legislative action. Isolated efforts cannot have the same impact as collective ones.
Community mobilisation is the process of engaging communities to identify community priorities, resources, needs and solutions in such a way
as to promote representative participation, good governance, accountability and peaceful change. Sustained mobilisation takes place when
communities remain active and empowered after the programme ends.
Community mobilizing is when experts drive the action of an issue and they are the ones who know the solutions. Community mobilizing is
categorized as issue oriented, its process is driven by action, and it can be a confrontational process. On the other hand community organizing is
when issues arise out of a community consensus. This process is goal oriented and not confrontational because everyone agrees that this issue
exists and is important. A hybrid of community mobilizing and community organizing is crucial if a coalition wants to achieve real outcomes.
Difference between SM & CM
Social mobilization refers to preparing people at large, mentally and socially, to achieve a goal, while community mobilization refers to a specific group of people’s readiness to achieve its desired goals and aims.
Social organization is the way a group of people interact in their community, for example, how they divide up power and access to
resources and specific goods.
THEME
Social organization is a method through which the different components of society are woven together to achieve optimum utilization of human and other resources.
PRACTICES
Social organization is a method through which a group of people is organized to foster the process of development and ensure
community participation on a sustainable basis.
Most Significant Change is an approach that collects a series of stories from program participants that are
analyzed in successive rounds by stakeholder groups to emerge with the most significant or meaningful
examples of changes brought about during the program.
The most significant change (MSC) technique is a form of participatory monitoring and evaluation. It is
participatory because many project stakeholders are involved both in deciding the sorts of change to be
recorded and in analysing the data. It is a form of monitoring because it occurs throughout the program cycle
and provides information to help people manage the program. It contributes to evaluation because it provides
data on impact and outcomes that can be used to help assess the performance of the program as a whole.
Essentially, the process involves the collection of significant change (SC) stories emanating from the field level,
and the systematic selection of the most significant of these stories by panels of designated stakeholders or
staff. The designated staff and stakeholders are initially involved by ‘searching’ for project impact. Once
changes have been captured, various people sit down together, read the stories aloud and have regular and
often in-depth discussions about the value of these reported changes. When the technique is implemented
successfully, whole teams of people begin to focus their attention on program impact.
Most Significant Change
Governance
Governance in general relates to the process of decision-making and how those decisions are implemented.
Accountability is an essential characteristic of good governance, where leaders are accountable for their decisions to people
affected by those decisions. When these processes are institutionalized they become a system of government. Governance is good
when it is accountable, transparent, just, responsive and participatory. Good governance is a goal of community mobilization, as well as a condition for all development initiatives to be sustainable.
Triangulation.
Data collection from three different sources about the same subject. This is considered the best way to ensure
that our information is valid. For example, if we want to know about the effects of a community mobilization
project, we might collect data via 1) interviews with key participants, including our own staff 2) a document
review to understand exactly what services were delivered and in what amounts 3) focus groups and/or a survey
of project participants. This helps us avoid the natural biases of any one method of data collection. Although
three different sources are not always possible, the primary point is to avoid reliance on a single source or
perspective.
Targets.
Sometimes called “milestones” or “benchmarks”, these tell us what we plan to achieve at specific points in the
life of our projects or programs. We use them to monitor our progress toward completion of our activities.
Formal education is classroom-based, provided by trained teachers.
Formal learning occurs in an organised and structured environment (such as in an education or training institution or on the job) and is explicitly designated as learning (in terms of objectives, time or resources). Formal learning is intentional from the learner’s point of view and typically leads to certification. “What students are taught from the syllabus.”
Informal learning, by contrast, is “learning resulting from daily activities related to work, family or leisure. It is not organised or structured in terms of objectives, time or learning support. Informal learning is in most cases unintentional from the learner’s perspective.” Informal learning outcomes may be validated and certified; informal learning is also referred to as experiential or incidental/random learning.
Non-formal Education
Non-formal education has an adopted strategy where student attendance is not strictly required. The educative process in non-formal education has more flexible curricula and methodology, and its activities or lessons take place outside institutions or schools. Here the needs and interests of the students are taken into consideration. Two features of non-formal education need to remain constant: centring the process on the student, according to his or her previously identified needs and possibilities; and the immediate usefulness of the education for the student’s personal and professional growth.
Because of the importance of the students’ interests and needs, this form of education meets individual needs better. Non-formal education is focused on the student, with the result that the student participates more, and when the needs of the students change, non-formal education can react more quickly because of its flexibility.
Non-formal learning is a loosely defined term covering various structured learning situations, such as swimming sessions for toddlers, community-based sports programs and conference-style seminars, which do not have the level of curriculum, syllabus, accreditation and certification associated with ‘formal’ learning.
Informal education happens outside the classroom, in after-school programs, community-based organizations,
museums, libraries, or at home. “Informal education is that learning which goes on outside of a formal learning
environment such as a school, a college or a university, therefore it is learning outside of the classroom/lecture
theatre; however more can be said by way of providing a definition of the term. Informal education can be seen as
‘learning that goes on in daily life’, and/or ‘learning projects that we undertake for ourselves“(Smith, 2009).
“learning that goes on in daily life and can be received from daily experience, such as from family, peer groups, the
media and other influences in a person’s environment” (Oñate, 2006). “It encompasses a huge variety of activities: it
could be a dance class at a church hall, a book group at a local library, cookery skills learnt in a community centre, a
guided visit to a nature reserve or stately home, researching the National Gallery collection on-line, writing a
Wikipedia entry or taking part in a volunteer project to record the living history of [a] particular community.” (DIU&S,
2009: p4).
Protection encompasses all activities aimed at ensuring full respect for the rights of the individual in accordance with human
rights law, international humanitarian law (which applies in situations of armed conflict) and refugee law.
•a legal or other formal measure intended to preserve civil liberties and rights.
•Child protection is the protection of children from violence, exploitation, abuse and neglect. Article 19 of the UN
Convention on the Rights of the Child provides for the protection of children in and out of the home.
Cluster sampling refers to a type of sampling method. With cluster sampling, the researcher divides the population into separate groups, called clusters. Then a simple random sample of clusters is selected from the population.
For example, a researcher wants to survey academic performance of high school students in Spain. He can divide the entire
population (population of Spain) into different clusters (cities). Then the researcher selects a number of clusters depending
on his research through simple or systematic random sampling.
Sampling
In statistics, quality assurance, and survey methodology, sampling is concerned with the selection of a subset of individuals from
within a statistical population to estimate characteristics of the whole population. ... In business and medical research, sampling is
widely used for gathering information about a population. In statistics, sampling allows you to test a hypothesis about the
characteristics of a population.
Sampling is the process of selecting units (e.g., people, organizations) from a population of interest so that by studying the sample
we may fairly generalize our results back to the population from which they were chosen. ... The two broad approaches are probability sampling and non-probability sampling.
Lot Quality Assurance Sampling (LQAS) fundamentals
• LQAS is used for the monitoring of (mainly health) programme coverage indicators, based on a stratified simple random sample of a small number of geographical units per stratum, also called a ‘lot’.
• It is seen as a good alternative to more complex and often more costly sampling techniques.
• The method is particularly suitable for frequently conducted monitoring surveys on programme coverage and other performance indicators in settings that do not require a high level of statistical precision.
• LQAS tests whether a given threshold value is achieved or not, rather than producing estimates for an indicator, although different ‘lots’ can be combined in order to estimate overall programme performance in terms of coverage.
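As a rough sketch of the LQAS idea described above (not an official protocol), the code below uses the binomial distribution to find a decision rule d for a lot of size n: the lot “passes” if at least d sampled respondents are covered, with d chosen so that the risks of misclassifying a high-coverage or low-coverage lot stay below chosen limits. The coverage thresholds and error limits here are assumptions for illustration.

```python
from scipy.stats import binom

def lqas_decision_rule(n, p_upper, p_lower, alpha=0.10, beta=0.10):
    """Smallest decision rule d such that:
    - a lot with true coverage p_upper is wrongly failed with probability <= alpha
    - a lot with true coverage p_lower is wrongly passed with probability <= beta
    The lot "passes" if at least d of the n sampled respondents are covered."""
    for d in range(n + 1):
        wrongly_fail_good_lot = binom.cdf(d - 1, n, p_upper)     # P(X < d | p_upper)
        wrongly_pass_bad_lot = 1 - binom.cdf(d - 1, n, p_lower)  # P(X >= d | p_lower)
        if wrongly_fail_good_lot <= alpha and wrongly_pass_bad_lot <= beta:
            return d
    return None  # no rule meets both error limits for this lot size

# Illustrative use: 19 respondents per lot, 80% coverage target, 50% "unacceptable" level.
# With these assumed inputs the rule works out to 13, in line with commonly published LQAS tables.
print(lqas_decision_rule(n=19, p_upper=0.80, p_lower=0.50))
```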
Types of sampling: sampling methods
Sampling in market research is of two types – probability sampling and non-probability sampling. Let’s take a closer look at these two methods of sampling.
1.Probability sampling: Probability sampling is a sampling technique where a researcher sets a selection of a few criteria and chooses members of a population
randomly. All the members have an equal opportunity to be a part of the sample with this selection parameter. And/or researchers can calculate the probability
of any single person in the population being selected for the study. These studies provide greater mathematical precision and analysis.
2. Non-probability sampling: In non-probability sampling, the researcher chooses members for the study arbitrarily or subjectively rather than through a random mechanism. This sampling method does not have a fixed or predefined selection process, which makes it difficult for all elements of the population to have an equal opportunity of being included in the sample. Researchers cannot calculate the probability of any individual in the population being selected for the study, so these samples tend to be less accurate and less representative of the larger population.
Probability sampling is a sampling technique in which researchers choose samples from a larger population using a method based on the theory of probability. This sampling method considers every member of the population and forms samples based on a fixed, random process. For example, in a population of 1,000 members, every member has a 1/1000 chance of being selected into the sample. Probability sampling reduces sampling bias and gives all members a fair chance to be included in the sample.
The main types of probability sampling techniques are:
Random Sampling/Simple Random Sample (SRS)
Under random sampling, every element of the population has an equal probability of being selected. It is a reliable method of obtaining information in which every single member of the population is chosen randomly, merely by chance. Pictured as a set of points, every point (unit) has an equal chance of being picked. Examples: the lottery method, a table of random numbers.
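A minimal sketch of simple random sampling using Python’s standard library, assuming the sampling frame is just a numbered list of units (the numbers are illustrative):

```python
import random

# Illustrative frame: 1,000 numbered households; every unit has an equal
# chance of selection, as in a lottery draw.
frame = list(range(1, 1001))
sample = random.sample(frame, k=50)  # draw 50 units without replacement
print(sorted(sample)[:10])           # peek at the first few selected units
```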
Stratified Sampling (a variation of random sampling; a stratum is a group)
Under stratified sampling, we divide the entire population into subpopulations (strata) that share some common property – for example, class labels in a typical ML classification task. We then randomly sample from each group individually, so that the groups are represented in the sample in the same ratio as in the entire population. For instance, if two groups appear in the population in a ratio of x to 4x, we sample from each group separately and keep that same ratio in the final sample. Practitioners may define strata by categories such as age, gender, income and profession.
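A minimal sketch of this idea, assuming each unit carries a stratum label (the strata and the 10% sampling fraction below are made up): the same fraction is drawn from every stratum so the sample keeps the population’s group proportions.

```python
import random
from collections import defaultdict

def stratified_sample(population, stratum_of, fraction, seed=0):
    """Draw the same fraction from every stratum so group ratios are preserved."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for unit in population:
        strata[stratum_of(unit)].append(unit)        # group units by stratum
    sample = []
    for members in strata.values():
        k = max(1, round(len(members) * fraction))   # proportional allocation
        sample.extend(rng.sample(members, k))
    return sample

# Illustrative population: 100 women and 400 men; a 10% stratified sample
# keeps the 1:4 ratio (about 10 women and 40 men, 50 units in total).
people = [("woman", i) for i in range(100)] + [("man", i) for i in range(400)]
print(len(stratified_sample(people, stratum_of=lambda p: p[0], fraction=0.10)))
```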
Cluster Sampling
In cluster sampling, we divide the entire population into subgroups (clusters), each of which has characteristics similar to those of the population as a whole. Instead of sampling individuals, we randomly select entire subgroups. For example, if the population falls into four clusters with similar properties (size and shape), we might randomly select two clusters and treat all of their members as the sample. A real-life example is a class of 120 students divided into groups of 12 for a common class project, where clustering parameters (designation, class, topic) are similar across groups. Likewise, to examine the dining habits of residents in a certain state, you could divide the residents into clusters based on the county they live in and then use a random sampling method to select eight counties for the study. Cluster sampling differs from stratified sampling because some clusters are left unrepresented in the final sample, whereas stratified sampling uses members from every stratum.
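A minimal sketch of cluster sampling under the same logic (the cluster names and counts are illustrative): whole clusters are drawn at random and every unit in a selected cluster enters the sample.

```python
import random

def cluster_sample(frame, cluster_of, n_clusters, seed=0):
    """Randomly select whole clusters; every unit in a chosen cluster is sampled."""
    rng = random.Random(seed)
    clusters = {}
    for unit in frame:
        clusters.setdefault(cluster_of(unit), []).append(unit)
    chosen = rng.sample(sorted(clusters), n_clusters)  # randomly pick whole clusters
    return [unit for name in chosen for unit in clusters[name]]

# Illustrative frame: 120 students in 10 classes of 12; sampling 2 whole classes
# yields 24 students.
students = [(f"class_{i % 10}", i) for i in range(120)]
print(len(cluster_sample(students, cluster_of=lambda s: s[0], n_clusters=2)))
```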
Systematic/Quasi Random Sampling
Systematic sampling draws items from the population at regular, predefined intervals (fixed and periodic). Because the interval is predefined, this technique is among the least time-consuming. For example, you can compile a list of 250 individuals in a population and use every fifth person as a study participant. It can be more convenient than simple random sampling and aims to avoid conscious selection bias; however, it differs from simple random sampling because not every combination of members can be selected (units that fall on the same interval are always selected together).
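A minimal sketch of systematic sampling, assuming an ordered sampling frame (the list and sample size are illustrative): pick a random start within the first interval, then take every k-th unit.

```python
import random

def systematic_sample(frame, n, seed=0):
    """Take every k-th unit after a random start within the first interval."""
    k = len(frame) // n                       # sampling interval
    start = random.Random(seed).randrange(k)  # random start in the first interval
    return frame[start::k][:n]

# Illustrative frame of 250 individuals sampled at every 5th person (k = 250 // 50 = 5).
frame = list(range(1, 251))
print(systematic_sample(frame, n=50)[:5])
```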
Multistage sampling
Under multistage sampling, we stack multiple sampling methods one after the other. For example, at the first stage, cluster sampling can be used to choose clusters from the population, and then random sampling can be used to choose elements from each cluster to form the final set. Multistage sampling occurs when you use different sampling methods at different stages of the same study, and it is helpful for large populations. For instance, to determine how much support a new government initiative has across the country, it is not practical to list every person in the country, so you might start by creating clusters in stage one for each state or geographic region (southwest, southeast, northeast, northwest). In the next stage, you might further divide these clusters into strata and choose random samples from each stratum.
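A minimal sketch stacking the two stages described above (the cluster names and sizes are illustrative): clusters are drawn first, then a simple random sample is drawn within each selected cluster.

```python
import random

def multistage_sample(frame, cluster_of, n_clusters, n_per_cluster, seed=0):
    """Stage 1: randomly choose clusters. Stage 2: simple random sample within each."""
    rng = random.Random(seed)
    clusters = {}
    for unit in frame:
        clusters.setdefault(cluster_of(unit), []).append(unit)
    stage_one = rng.sample(sorted(clusters), n_clusters)  # stage 1: whole clusters
    sample = []
    for name in stage_one:                                # stage 2: SRS within each cluster
        members = clusters[name]
        sample.extend(rng.sample(members, min(n_per_cluster, len(members))))
    return sample

# Illustrative frame: respondents spread over 4 regions; choose 2 regions,
# then 5 respondents per chosen region.
respondents = [(region, i) for region in ("NE", "NW", "SE", "SW") for i in range(100)]
print(len(multistage_sample(respondents, cluster_of=lambda r: r[0],
                            n_clusters=2, n_per_cluster=5)))  # -> 10
```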
Bias in sampling
Sampling bias occurs when some members of a population are systematically more likely to be selected in a sample than
others. There are five important potential sources of bias that should be considered when selecting a sample,
irrespective of the method used. Sampling bias may be introduced when:
1.Any pre-agreed sampling rules are deviated from
2.People in hard-to-reach groups are omitted
3.Selected individuals are replaced with others, for example if they are difficult to contact
4.There are low response rates
5.An out-of-date list is used as the sample frame (for example, if it excludes people who have recently moved
to an area)
Sampling definitions:
•Population
The total number of people or things you are interested in
•Sample
A smaller number within your population that will represent the
whole
•Sampling error
Sampling error is any type of bias that is attributable to mistakes in either drawing a sample or determining the sample size.
In non-probability sampling, the sample is selected on the basis of non-random or subjective criteria, and not every member of the population has a chance of being included. The non-probability method is a sampling method in which feedback is collected based on a researcher’s or statistician’s sample-selection judgement rather than on a fixed selection process. In most situations, the output of a survey conducted with a non-probability sample leads to skewed results, which may not represent the desired target population. However, in situations such as the preliminary stages of research, or where there are cost constraints on conducting the research, non-probability sampling can be much more useful than probability sampling. Common non-probability sampling methods include convenience sampling, voluntary response sampling, purposive/judgemental sampling, snowball sampling, and quota sampling.
Convenience Sampling (accidental/haphazard sample)
Under convenience sampling, the researcher includes only those individuals who are most accessible and available to participate in the study – for example, sampling a group of people walking by on a street. In this case, the researcher has no control over the composition of the sample group. This type of sampling is both cost- and time-efficient, as researchers can gather respondents relatively quickly.
Voluntary Sampling
Under voluntary sampling, interested people take part by themselves, usually by filling in some sort of survey form. A good example is the YouTube survey asking “Have you seen any of these ads?”, which has been shown frequently. Here, the researcher conducting the survey has no say in choosing who participates; respondents volunteer themselves.
Snowball Sampling
Under snowball sampling, the final set of participants is reached through other participants: the researcher asks known contacts to find people who would like to take part in the study, and those people in turn refer others. Researchers use snowball sampling when the study involves groups of people who are difficult to reach or gather; to recruit participants, researchers may ask the test subjects they already have to contact and nominate others to take part. While this can be an effective way to gather participants, it makes the composition of the group more difficult to control. If the population is hard to access, snowball sampling can be used to recruit participants via other participants – the number of people you have access to “snowballs” as you get in contact with more people. Snowball sampling is also known as a chain-referral sampling technique.
Quota sampling involves researchers creating a sample based on predefined traits. For example, the researcher might gather a group of people who are all aged 65 or older, which allows data to be gathered easily from a specific demographic. Selection in this technique happens against a pre-set standard: because the sample is formed on specific attributes, the created sample is intended to have the same qualities as are found in the total population. It is a rapid method of collecting samples. Quota sampling is classified into two types: controlled quota sampling and uncontrolled quota sampling.
Controlled Quota Sampling:
If the sampling imposes restrictions on the researcher’s or statistician’s choice of sample, it is known as controlled quota sampling. In this method, the researcher can select only a limited set of samples.
Uncontrolled Quota Sampling:
If the sampling does not impose any restrictions on the researcher’s or statistician’s choice of sample, it is known as uncontrolled quota sampling. In this process, the researcher can select samples at their own discretion.
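As a rough sketch of how a quota is enforced in practice (the traits and quota numbers are made up, and the selection is deliberately non-random): respondents are accepted in whatever order they are encountered until each group’s quota is full.

```python
def quota_sample(respondents, trait_of, quotas):
    """Accept respondents in encounter order until every quota is filled (non-random)."""
    counts = {group: 0 for group in quotas}
    accepted = []
    for person in respondents:
        group = trait_of(person)
        if group in counts and counts[group] < quotas[group]:
            accepted.append(person)
            counts[group] += 1
        if all(counts[g] >= quotas[g] for g in quotas):
            break
    return accepted

# Illustrative quotas: 3 respondents aged 65+ and 2 under 65, taken as encountered.
stream = [("65+", "A"), ("under65", "B"), ("65+", "C"), ("65+", "D"),
          ("under65", "E"), ("65+", "F")]
print(quota_sample(stream, trait_of=lambda r: r[0], quotas={"65+": 3, "under65": 2}))
```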
Judgmental/Purposive/Authoritative/Selective/Subjective Sampling
In judgmental sampling, the choice of research subjects is solely at the discretion of the researcher: the researcher is responsible for picking individuals who they feel would be a positive addition to the study. To create their sample, researchers may ask prospective individuals a few questions relating to the study and then decide based on their answers. In other words, the researcher selects a sample based on experience or knowledge of the group to be sampled – hence “judgment” sampling. It is often used in qualitative research, where the researcher wants to gain detailed knowledge about a specific phenomenon rather than make statistical inferences, or where the population is very small and specific. An effective purposive sample must have clear criteria and a clear rationale for inclusion.
Convenience sampling occurs when researchers choose
respondents based on elements of convenience, such as being
near respondents or being close friends with respondents. For
instance, a survey conductor may poll people at a nearby park.
Convenience sampling is easier and cheaper than random
sampling, but you cannot generalize the results, which makes it
less reliable. Researchers have nearly no authority to select the
sample elements, and it’s purely done based on proximity and
not representativeness. This non-probability sampling method
is used when there are time and cost limitations in collecting
feedback. In situations where there are resource limitations
such as the initial stages of research, convenience sampling is
used.
Voluntary response sampling refers to soliciting responses from volunteers. Unlike other studies, participants select themselves rather than being selected
by those carrying out the research. For instance, a teaching assistant may send an evaluation survey via email asking for feedback on their performance.
Voluntary response sampling is typically unrepresentative and not random, as only respondents with strong opinions are likely to participate.
Difference between probability sampling and non-probability sampling methods
How to choose the correct sample size
Finding the best sample size for your target population is something you’ll
need to do again and again, as it’s different for every study.
To make life easier, we’ve provided a sample size calculator. To use it, you
need to know your
•Population size
•Confidence level
•Margin of error (confidence interval)
If any of those terms are unfamiliar, have a look at our blog post on determining sample size for details of what they mean and how to find them.
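As a worked sketch of how those three inputs combine (not the calculator referred to above): the code below uses the common sample-size formula with a finite population correction, assuming maximum variability (p = 0.5) and a z-score looked up from the confidence level.

```python
import math

def sample_size(population, confidence=0.95, margin_of_error=0.05, p=0.5):
    """Cochran-style sample size with a finite population correction.
    Assumes maximum variability (p = 0.5) unless told otherwise."""
    z = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}[confidence]  # common z-scores
    n0 = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)    # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)                    # finite population correction
    return math.ceil(n)

# Illustrative inputs: population of 10,000, 95% confidence, 5% margin of error.
print(sample_size(10_000))  # roughly 370 respondents
```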
Important terminologies
•Population: the entire group of people, events or things of interest that the researcher wishes to investigate.
•Universe: the larger group from which individuals are selected to participate in the study.
•Element: a single member of the population.
•Sample: a subset of the population comprising some of its members; the representatives selected for a study whose characteristics exemplify the larger group from which they were selected.
•Sampling unit: the element, or set of elements, that is available for selection at some stage of the sampling process.
•Subject: a single member of the sample, just as an element is a single member of the population.
High Class Call Girls Mumbai Tanvi 9910780858 Independent Escort Service Mumbai
 
2024: The FAR, Federal Acquisition Regulations - Part 27
2024: The FAR, Federal Acquisition Regulations - Part 272024: The FAR, Federal Acquisition Regulations - Part 27
2024: The FAR, Federal Acquisition Regulations - Part 27
 
Cunningham Road Call Girls Bangalore WhatsApp 8250192130 High Profile Service
Cunningham Road Call Girls Bangalore WhatsApp 8250192130 High Profile ServiceCunningham Road Call Girls Bangalore WhatsApp 8250192130 High Profile Service
Cunningham Road Call Girls Bangalore WhatsApp 8250192130 High Profile Service
 
Take action for a healthier planet and brighter future.
Take action for a healthier planet and brighter future.Take action for a healthier planet and brighter future.
Take action for a healthier planet and brighter future.
 
Russian Call Girl Hebbagodi ! 7001305949 ₹2999 Only and Free Hotel Delivery 2...
Russian Call Girl Hebbagodi ! 7001305949 ₹2999 Only and Free Hotel Delivery 2...Russian Call Girl Hebbagodi ! 7001305949 ₹2999 Only and Free Hotel Delivery 2...
Russian Call Girl Hebbagodi ! 7001305949 ₹2999 Only and Free Hotel Delivery 2...
 
Model Town (Delhi) 9953330565 Escorts, Call Girls Services
Model Town (Delhi)  9953330565 Escorts, Call Girls ServicesModel Town (Delhi)  9953330565 Escorts, Call Girls Services
Model Town (Delhi) 9953330565 Escorts, Call Girls Services
 
Premium Call Girls Btm Layout - 7001305949 Escorts Service with Real Photos a...
Premium Call Girls Btm Layout - 7001305949 Escorts Service with Real Photos a...Premium Call Girls Btm Layout - 7001305949 Escorts Service with Real Photos a...
Premium Call Girls Btm Layout - 7001305949 Escorts Service with Real Photos a...
 
VIP Call Girls Doodh Bowli ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With R...
VIP Call Girls Doodh Bowli ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With R...VIP Call Girls Doodh Bowli ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With R...
VIP Call Girls Doodh Bowli ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With R...
 
productionpost-productiondiary-240320114322-5004daf6.pptx
productionpost-productiondiary-240320114322-5004daf6.pptxproductionpost-productiondiary-240320114322-5004daf6.pptx
productionpost-productiondiary-240320114322-5004daf6.pptx
 
Call Girls Service AECS Layout Just Call 7001305949 Enjoy College Girls Service
Call Girls Service AECS Layout Just Call 7001305949 Enjoy College Girls ServiceCall Girls Service AECS Layout Just Call 7001305949 Enjoy College Girls Service
Call Girls Service AECS Layout Just Call 7001305949 Enjoy College Girls Service
 
VIP High Profile Call Girls Gorakhpur Aarushi 8250192130 Independent Escort S...
VIP High Profile Call Girls Gorakhpur Aarushi 8250192130 Independent Escort S...VIP High Profile Call Girls Gorakhpur Aarushi 8250192130 Independent Escort S...
VIP High Profile Call Girls Gorakhpur Aarushi 8250192130 Independent Escort S...
 

  • 6. Evaluation. Evaluation is a periodic assessment of the efficiency, effectiveness, impact, sustainability and relevance of a project in the context of stated objectives. It is usually undertaken as an independent examination of the background, objectives, results, activities and means deployed, with a view to drawing lessons that may guide future decision-making. It therefore seeks to determine, as systematically and objectively as possible, the relevance, efficiency and effects of a project in terms of its objectives. • Evaluation is an in-depth, retrospective analysis of a specific aspect (or aspects) of a project that occurs at a single point in time. Evaluation is generally more focused and intense than monitoring and often uses more time-consuming techniques such as surveys, focus groups, interviews and workshops.
  • 7. Evaluation • The systematic and objective assessment of an on-going or completed project, programme or policy, its design, implementation and results. The aim is to determine the relevance and fulfillment of objectives, development efficiency, effectiveness, impact and sustainability. An evaluation should provide information that is credible and useful, enabling the incorporation of lessons learned into the decision-making process of both recipients and donors. Evaluation also refers to the process of determining the worth or significance of an activity, policy or program: an assessment, as systematic and objective as possible, of a planned, on-going, or completed development intervention. Note: evaluation in some instances involves the definition of appropriate standards, the examination of performance against those standards, an assessment of actual and expected results and the identification of relevant lessons. (DAC) • Evaluations tell us whether our projects are achieving our goals and reaching our target groups, whether we are effective and sustainable, and whether our project design and activities are appropriate. They produce recommendations and lessons learned that help us improve.
  • 8. Evaluation of an undertaking at different points in time An investment case, a process or a project is typically divided into three distinct phases. In the beginning, an idea and decision phase lasts until the final decision to implement is made. The implementation phase follows, continuing until the project’s outputs are realized. The goal could be to build a building, to reorganize an organization, or to have a student pass a final exam. Finally, there is an operational phase, in which the benefits of the project are realized or revenue comes in.
  • 11. Types of Evaluation
  Stage of project, purpose and corresponding types of evaluation:
  • Conceptualization Phase: Helps prevent waste and identify potential areas of concern while increasing the chances of success. Types: Formative Evaluation.
  • Implementation Phase: Optimizes the project, measures its ability to meet targets, and suggests improvements to efficiency. Types: Process Evaluation, Outcome Evaluation, Economic Evaluation.
  • Project Closure Phase: Gives insights into the project's success and impact, and highlights potential improvements for subsequent projects. Types: Impact Evaluation, Summative Evaluation, Goals-Based Evaluation.
  Experience indicates that today, most evaluation activities occur in the implementation phase or just after its conclusion, options designated interim evaluation and final evaluation, respectively. This is puzzling because the implementation phase is the period in which the project is least likely to benefit from an evaluation (Samset 2003). An interim evaluation can help avoid or correct mistakes during a project, that is, it provides management information. A final evaluation assesses the results at the conclusion of the implementation phase, that is, it provides control information.
  • 12. Formative Evaluation (also known as 'evaluability assessment')
  Meaning/Purpose: Formative evaluation is used before program design or implementation. It generates data on the need for the program and develops the baseline for subsequent monitoring. It also identifies areas of improvement and can give insights on what the program's priorities should be. This helps project managers determine their areas of concern and focus, and increases awareness of the program among the target population prior to launch. A formative evaluation aims to improve and refine an existing project (MEAL DPro). An evaluation conducted to improve performance, most often during the implementation phase of projects or programs. Note: formative evaluation may also be conducted for other reasons such as compliance, legal requirements or as part of a larger evaluation initiative (OECD).
  When: New program development; program expansion. It is carried out early in project implementation, up to the midpoint.
  What: The need for the project among the potential beneficiaries; the current baseline of relevant indicators, which can help show impact later.
  Why: Helps make early improvements to the program; allows project managers to refine or improve the program.
  How: Conduct sample surveys and focus group discussions among the target population, focused on whether they are likely to need, understand, and accept program elements.
  Questions to ask: Is there a need for the program? What can be done to improve it?
  Process Evaluation (also known as 'program monitoring')
  Meaning/Purpose: Process evaluation occurs once program implementation has begun, and it measures how effective the program's procedures are. The data it generates is useful in identifying inefficiencies and streamlining processes, and it portrays the program's status to external parties. A process evaluation aims to understand how well a project is being implemented (or was implemented), particularly if you want to replicate or enlarge the response.
  When: When program implementation begins; during operation of an existing program. It is carried out during project implementation (often at the midpoint) or at the end.
  What: Whether program goals and strategies are working as they should; whether the program is reaching its target population, and what they think about it.
  Why: Provides an opportunity to avoid problems by spotting them early; allows program administrators to determine how well the program is working.
  How: Conduct a review of internal reports and a survey of program managers and a sample of the target population. The aim should be to measure the number of participants, how long they have to wait to receive benefits, and what their experience has been.
  Questions to ask: Who is being reached by the program? How is the program being implemented and what are the gaps? Is it meeting targets?
  Outcome Evaluation
  Meaning/Purpose: Outcome evaluation is conventionally used during program implementation. It generates data on the program's outcomes and …
  When: After the program …
  What: How much the program has affected the target population …
  Why: Helps program administrators tell …
  How: A randomized controlled trial …
  Questions to ask: Did participants report the desired …
  • 13. Economic Evaluation (also known as 'cost analysis', 'cost-effectiveness evaluation', 'cost-benefit analysis', and 'cost-utility analysis')
  Meaning/Purpose: Economic evaluation is used during the program's implementation and looks to measure the benefits of the program against the costs. Doing so generates useful quantitative data that measures the efficiency of the program. This data is like an audit, and provides useful information to sponsors and backers who often want to see what benefits their money would bring to beneficiaries.
  When: At the beginning of a program, to remove potential leakages; during the operation of a program, to find and remove inefficiencies.
  What: What resources are being spent and where; how these costs are translating into outcomes.
  Why: Program managers and funders can justify or streamline costs; the program can be modified to deliver more results at lower costs.
  How: A systematic analysis of the program by collecting data on program costs, including capital and man-hours of work. It will also require a survey of program officers and the target population to determine potential areas of waste.
  Questions to ask: Where is the program spending its resources? What are the resulting outcomes?
  Impact Evaluation
  Meaning/Purpose: Impact evaluation studies the entire program from beginning to end (or at whatever stage the program is at), and looks to quantify whether or not it has been successful. Focused on the long-term impact, impact evaluation is useful for measuring sustained changes brought about by the program or making policy changes or modifications to the program. An impact or outcome evaluation aims to assess how well a project met its goal to produce change. Impact evaluations can use rigorous data collection and analysis and control groups.
  When: At the end of the program; at pre-selected intervals in the program; at the end of a project. It requires baseline data to be gathered at the beginning of implementation and regular, rigorous monitoring activities.
  What: Assesses the change in the target population's well-being; accounts for what would have happened if there had been no program.
  Why: To show proof of impact by comparing beneficiaries with control groups; provides insights to help in making policy and funding decisions.
  How: A macroscopic review of the program, coupled with an extensive survey of program participants, to determine the effort involved and the impact achieved. Insights from program officers and suggestions from program participants are also useful, and a control group of non-participants for comparison is helpful.
  Questions to ask: What changes in program participants' lives are attributable to the program? What would those not participating in the program have missed out on?
  Summative Evaluation
  Meaning/Purpose: Summative evaluation is conducted after the program's completion or at the end of a program cycle. It generates data about how well the project delivered benefits to the target population. It is useful for program administrators to justify the …
  When: At the end of a program; at the end of a program cycle.
  What: How effectively the program made the desired change happen; how the program …
  Why: Provides data to justify continuing the program; generates insights into the effectiveness and …
  How: Conduct a review of internal reports and a survey for program managers …
  Questions to ask: Should the program continue to be funded? Should the …
  • 14. Goals-Based Evaluation (also known as 'objectively set evaluation')
  Meaning/Purpose: Goals-based evaluation is usually done towards the end of the program or at previously agreed-upon intervals. Development programs often set 'SMART' targets (Specific, Measurable, Attainable, Relevant, and Timely), and goals-based evaluation measures progress towards these targets. The evaluation is useful in presenting reports to program administrators and backers, as it provides them the information that was agreed upon at the start of the program.
  When: At the end of the program; at pre-decided milestones.
  What: How the program has performed on initial metrics; whether the program has achieved its goals.
  Why: To show that the program is meeting its initial benchmarks; to review the program and its progress.
  How: This depends entirely on the goals that were agreed upon. Usually, goals-based evaluation would involve some survey of the participants to measure impact, as well as a review of input costs and efficiency.
  Questions to ask: Has the program met its goals? Were the goals and objectives achieved due to the program or due to externalities?
  Ex-post / Sustained and Emerging Impacts Evaluation (SEIE)
  Meaning/Purpose: Sustained and Emerging Impacts Evaluation (SEIE) refers to an evaluation that focuses on impacts some time after the end of an intervention (which might be a project, policy or group of projects or programs) or after the end of participants' involvement in an ongoing intervention. It examines the extent to which intended impacts have been sustained as well as what unintended impacts have emerged over time (positive and negative). It is most commonly done for interventions with a finite timeframe, such as projects funded or implemented by international development agencies, multilateral organizations or philanthropic foundations, or local projects funded for a period of time by a national government.
  When: The timing of an SEIE should depend on the expected change trajectory. It needs to be long enough after the intervention to be able to see some change from initial impacts, but not so long that it is hard to collect data and no longer possible to use the information to inform decisions. SEIE provides the "missing piece" of evaluation in the project/programme cycle.
  Ex Ante Evaluation
  Meaning/Purpose: Ex ante evaluation is a broad initial assessment aimed at identifying which alternative will yield the greatest benefit from an intended investment. More commonly, considerable resources are used on detailed planning of a single, specific alternative. Ex-ante evaluation is performed before implementation of a development intervention. (DAC definition)
  • 15. Participatory and Empowerment Evaluation
  Two approaches are particularly useful when framing an evaluation of community engagement programs; both engage stakeholders. In one, the emphasis is on the importance of participation; in the other, it is on empowerment. This slide describes the purposes and characteristics of the two approaches.
  Participatory Evaluation. The first approach, participatory evaluation, actively engages the community in all stages of the evaluation process. Participatory evaluation can help improve program performance by (1) involving key stakeholders in evaluation design and decision making, (2) acknowledging and addressing asymmetrical levels of power and voice among stakeholders, (3) using multiple and varied methods, (4) having an action component so that evaluation findings are useful to the program's end users, and (5) explicitly aiming to build the evaluation capacity of stakeholders.
  Characteristics of participatory evaluation: The focus is on participant ownership; the evaluation is oriented to the needs of the program stakeholders rather than the funding agency. Participants meet to communicate and negotiate to reach a consensus on evaluation results, solve problems, and make plans to improve the program. Input is sought and recognized from all participants. The emphasis is on identifying lessons learned to help improve program implementation and determine whether targets were met. The evaluation design is flexible and determined (to the extent possible) during the group processes. The evaluation is based on empirical data to determine what happened and why. Stakeholders may conduct the evaluation with an outside expert serving as a facilitator.
  Potential disadvantages of participatory and empowerment evaluation: These include (1) the possibility that the evaluation will be viewed as less objective because of stakeholder involvement, (2) difficulties in addressing highly technical aspects, (3) the need for time and resources when involving an array of stakeholders, and (4) domination and misuse by some stakeholders to further their own interests. However, the benefits of fully engaging stakeholders throughout the evaluation outweigh these concerns (Fetterman et al., 1996).
  Empowerment Evaluation. The second approach, empowerment evaluation, helps to equip program personnel with the necessary skills to conduct their own evaluation and ensure that the program runs effectively. Empowerment evaluation is a stakeholder involvement approach designed to provide groups with the tools and knowledge they need to monitor and evaluate their own performance and accomplish their goals. It focuses on fostering self-determination and sustainability, and it is particularly suited to the evaluation of comprehensive community-based initiatives or place-based initiatives.
  A number of theories guide empowerment evaluation: • Empowerment theory focuses on gaining control of resources in one's environment; it also provides a guide for the role of the empowerment evaluator. • Self-determination theory highlights specific mechanisms or behaviors that enable the actualization of empowerment. • Process use cultivates ownership by placing the approach in community and staff members' hands. • Theories of use and action explain how empowerment evaluation helps people "walk their talk" and produce desired results.
  Characteristics of empowerment evaluation: It values improvement in people, programs, and organizations to help them achieve results. Community ownership of the design and conduct of the evaluation and implementation of the findings. Inclusion of appropriate participants from all levels of the program, funders, and community. Democratic participation and clear and open evaluation plans and methods. Commitment to social justice and a fair allocation of resources, opportunities, obligations, and bargaining power. Use of community knowledge to understand the local context and to interpret results. Use of evidence-based strategies with adaptations to the local environment and culture. Building the capacity of program staff and participants to improve their ability to conduct their own evaluations. Organizational learning, ensuring that programs are responsive to changes and challenges. Accountability to funders' expectations.
  • 16. Other types of evaluation
  Meta-evaluation: A meta-evaluation is an instrument used to aggregate findings from a series of evaluations. It also involves an evaluation of the quality of this series of evaluations and its adherence to established good practice in evaluation. A meta-evaluation is a systematic and formal evaluation of evaluations; it examines the methods used within an evaluation or set of evaluations to bolster the credibility of findings. This type is often used in policy-making settings and is carried out external to the project implementation cycle. (By comparison, a meta-analysis provides more robust results that can help researchers better understand the magnitude of an effect, and it provides important conclusions and trends that can influence future research, policy-makers' decisions, and how patients receive care.)
  Cluster evaluation: An evaluation of a set of related activities, projects and/or programs.
  External evaluation: The evaluation of a development intervention conducted by entities and/or individuals outside the donor or implementing organizations. (OECD)
  Independent evaluation: An evaluation carried out by entities or persons free of the control of those responsible for the design and implementation of the project. Note: the credibility of an evaluation depends in part on how independently it has been carried out. Independence implies freedom from political influence and organizational pressure; it is characterized by full access to information and by full autonomy in carrying out investigations and reporting findings. (OECD)
  Other listed types: internal evaluation, joint evaluation, mid-term evaluation, program evaluation, project evaluation, sector program evaluation, self-evaluation.
  Evaluability assessments: Carried out before an evaluation begins; they can be useful in setting realistic expectations for what information the evaluation can provide, what evidence can be gathered, and how the evaluation will answer questions.
  Focused evaluation: e.g. a KAP (knowledge, attitudes and practices) survey.
  Utilization-Focused Evaluation (UFE): Developed by Michael Quinn Patton, UFE is an approach based on the principle that an evaluation should be judged on its usefulness to its intended users. Therefore evaluations should be planned and conducted in ways that enhance the likely utilization of both the findings and of the process itself to inform decisions and improve performance. UFE has two essential elements. First, the primary intended users of the evaluation must be clearly identified and personally engaged at the beginning of the evaluation process to ensure that their primary intended uses can be identified. Second, evaluators must ensure that these intended uses of the evaluation by the primary intended users guide all other decisions that are made about the evaluation process. Rather than focusing on general and abstract users and uses, UFE is focused on real and specific users and …
  • 17. Principles-Focused Evaluation
  Meaning/Purpose: In Principles-Focused Evaluation (PFE), the principles that guide an initiative are evaluated. This approach was created by evaluator Michael Quinn Patton for initiatives guided primarily by principles. PFE is an approach to evaluation, not a specific set of steps; it can look very different depending on the initiative, the principles, the context, and the people conducting the evaluation.
  Principles are statements of how human beings should act that apply in a wide variety of situations. There are two main types: moral principles and effectiveness principles. Moral principles tell human beings how they should act for moral reasons. For example, you may hold the principle "Intervene when a human rights violation is occurring" because you believe that upholding a person's human rights is the moral thing to do. Effectiveness principles tell human beings how they should act in order to be effective at achieving a certain outcome. For example, you may hold the same principle because you believe that this is the most effective way to protect a person's human rights. For a Principles-Focused Evaluation, principles need to include statements of how to behave. The six principles of human rights (universality, indivisibility, participation, accountability, transparency and non-discrimination) can be used as both moral and effectiveness principles; they can be evaluated in a Principles-Focused Evaluation if they are made into clear statements of how to behave. Please see Appendix A for examples of these statements for human rights principles.
  Why: Principles-focused evaluation informs choices about which principles are appropriate for what purposes in which contexts, helping to navigate the treacherous terrain of conflicting guidance and competing advice. What principles work for what situations with what results is an evaluation question. Thus, from an evaluation perspective, principles are hypotheses, not truths: they may or may not work, they may or may not be followed, and they may or may not lead to desired outcomes. Whether they work, whether they are followed, and whether they yield desired outcomes are subject to evaluation. Learning to evaluate principles, and applying what you learn from doing so, takes on increasing importance in an ever more complex world where our effectiveness depends on adapting to …
  The GUIDE framework describes a well-defined principle as: Guiding (providing guidance), Useful, Inspiring, Developmental (supportive of ongoing learning, growth, and adaptation), and Evaluable (able to be evaluated). You can use this framework as a tool for defining principles. For example, if we state a campaign principle of non-discrimination, "Work to reduce inequities in power and resources", you could use the GUIDE criteria to test it by asking members of your campaign: Is this principle providing our campaign guidance? Is it useful? Is it inspiring? Does it support us in learning, growing and adapting? Is the principle clear enough that we can evaluate it?
  • 18. Developmental Evaluation
  Developmental Evaluation (DE) is an evaluation approach that can assist social innovators in developing social change initiatives in complex or uncertain environments. DE's originators liken their approach to the role of research and development in the private-sector product development process, because it facilitates real-time, or close to real-time, feedback to program staff, thus enabling a continuous development loop. Michael Quinn Patton is careful to describe this approach as one choice that is responsive to context; it is not intended as the solution to every situation. Developmental evaluation is particularly suited to innovation, radical program re-design, replication, complex issues and crises. In these situations, DE can help by framing concepts, testing quick iterations, tracking developments and surfacing issues. This description is from Patton (2010), Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use: "Developmental Evaluation supports innovation development to guide adaptation to emergent and dynamic realities in complex environments. Innovations can take the form of new projects, programs, products, organizational changes, policy reforms, and system interventions. A complex system is characterized by a large number of interacting and interdependent elements in which there is no central control. Patterns of change emerge from rapid, real time interactions that generate learning, evolution, and development – if one is paying attention and knows how to observe and capture the important and emergent patterns. Complex environments for social interventions and innovations are those in which what to do to solve problems is uncertain and key stakeholders are in conflict about how to proceed." Developmental evaluation helps an organisation to generate rapid learning to support the direction of the development of a program, and/or to affirm the need for a change of course. DE provides real-time feedback so that program stakeholders can implement new measures and actions as goals emerge and evolve.
  • 20. A basic monitoring system consists of: • Identified objects and indicators to be examined, pertaining to inputs (including expenditures), outputs, outcomes and impact (what?); • Methods/means of verification and the frequency of data collection for those indicators (how, when, by whom?); • Processing and analysis of the data; and • Defining corrective actions.
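  The sketch below is one minimal way to represent these four elements in code. It is illustrative only and not drawn from any specific M&E toolkit: the indicator name, the "field officer" role and the 10% tolerance rule are all made-up assumptions.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """One monitored indicator: what is examined, how often, and by whom (means of verification)."""
    name: str
    target: float
    frequency: str      # e.g. "monthly"
    responsible: str    # who collects and verifies the data

def check_indicator(ind: Indicator, actual: float, tolerance: float = 0.10) -> str:
    """Compare the latest measurement against the target and flag corrective action
    when the shortfall exceeds the tolerance (10% by default, a made-up threshold)."""
    if actual >= ind.target * (1 - tolerance):
        return f"{ind.name}: on track ({actual} vs target {ind.target})"
    return f"{ind.name}: off track ({actual} vs target {ind.target}) - corrective action needed"

# Hypothetical example
sessions = Indicator("Training sessions delivered", target=40, frequency="monthly", responsible="Field officer")
print(check_indicator(sessions, actual=31))
```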
  • 22. M&E plan • M&E plan, which describes how the whole M&E system for the program works, including things like who is responsible for it, what forms and tools will be used, how the data will flow through the organisation, and who will make decisions using the data. • In other organisations the whole M&E plan is called an M&E framework (as if things weren’t confusing enough!). http://www.tools4dev.org/resources/monitoring-evaluation-plan- template/
  • 23. What is an M&E system? • As with many things in international development, the precise definition of an M&E system varies between different organisations. • In most cases an M&E system refers to all the indicators, tools and processes that you will use to measure if a program has been implemented according to the plan (monitoring) and is having the desired result (evaluation). • An M&E system is often described in a document called an M&E plan. An M&E framework is one part of that plan.
  • 24. Monitoring and evaluation (M&E) framework • there is no standard definition of a Monitoring and Evaluation (M&E) framework, or how it differs from an M&E plan. • For many organisations, an M&E framework is a table that describes the indicators that are used to measure whether the program is a success. • The M&E framework then becomes one part of the M&E plan, which describes how the whole M&E system for the program works, including things like who is responsible for it, what forms and tools will be used, how the data will flow through the organisation, and who will make decisions using the data. In other organisations the whole M&E plan is called an M&E framework (as if things weren’t confusing enough!). Note: An M&E framework can also be called an evaluation matrix.
  • 25. (M&E) Framework Example. Each row gives the indicator definition (how it is calculated), the baseline (current value), the target, the data source (how it will be measured), the frequency, who is responsible, and where it is reported.
  Goal indicator: Percentage of Grade 6 primary students continuing on to high school. Calculation: number of students who start the first day of Grade 7, divided by the total number of Grade 6 students in the previous year, multiplied by 100. Baseline: 50%. Target: 60%. Data source: primary and high school enrolment records. Frequency: annual. Responsible: program manager. Reporting: annual enrolment report.
  Outcome indicator: Reading proficiency among children in Grade 6. Calculation: sum of all reading proficiency test scores for all students in Grade 6, divided by the total number of students in Grade 6. Baseline: average score 47. Target: average score 57. Data source: reading proficiency tests using the national assessment tool. Frequency: every 6 months. Responsible: teachers. Reporting: 6-monthly teacher reports.
  Output indicator: Number of students who completed a summer reading camp. Calculation: total number of students who were present on both the first and last day of the summer reading camp. Baseline: 0. Target: 500. Data source: summer camp attendance records. Frequency: end of every camp. Responsible: teachers. Reporting: camp review report.
  Output indicator: Number of parents of children in Grade 6 who helped their children read at home in the last week. Calculation: total number of parents who answered "yes" to the question "Did you help your child read at home any time in the last week?" Baseline: 0. Target: 500. Data source: survey of parents. Frequency: end of every camp. Responsible: program officer. Reporting: survey report.
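  The two calculations spelled out in the framework (the Grade 6 to Grade 7 transition percentage and the average reading score) translate directly into code. The baseline and target figures are taken from the example above; the student counts and test scores fed in below are made up purely to illustrate the arithmetic.

```python
def transition_rate(grade7_starters: int, grade6_previous_year: int) -> float:
    """Goal indicator: Grade 7 starters divided by last year's Grade 6 cohort, times 100."""
    return grade7_starters / grade6_previous_year * 100

def average_reading_score(scores: list[float]) -> float:
    """Outcome indicator: sum of Grade 6 reading test scores divided by the number of students."""
    return sum(scores) / len(scores)

# Made-up measurement data
print(transition_rate(grade7_starters=110, grade6_previous_year=200))  # 55.0, between the 50% baseline and 60% target
print(average_reading_score([40, 55, 62, 47, 51]))                     # 51.0, between the baseline (47) and target (57)
```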
  • 26. Comparisons between terminologies of different donor agencies for results / logical frameworks (compiled by Jim Rugh for CARE International and InterAction's Evaluation Interest Group). Each list runs from the highest level of the results hierarchy down to activities/inputs:
  • Generic hierarchy: Ultimate Impact; End Outcomes; Intermediate Outcomes; Outputs; Interventions
  • Needs-based: Higher Consequence; Specific Problem; Cause; Solution; Process; Inputs
  • CARE terminology: Program Impact; Project Impact; Effects; Outputs; Activities; Inputs
  • CARE logframe: Program Goal; Project Final Goal; Intermediate Objectives; Outputs; Activities; Inputs
  • PC/LogFrame: Goal; Purpose; Outputs; Activities
  • USAID Results Framework: Strategic Objective; Intermediate Results; Outputs; Activities; Inputs
  • USAID Logframe: Final Goal; Strategic Goal/Objective; Intermediate results; Activities
  • DANIDA + DfID: Goal; Purpose; Outputs; Activities
  • CIDA + GTZ: Overall goal; Project purpose; Results/outputs; Activities; Inputs
  • European Union: Overall Objective; Project Purpose; Results; Activities
  • FAO + UNDP + NORAD: Development Objective; Immediate Objectives; Outputs; Activities; Inputs
  • UNHCR: Sector Objective; Goal; Project Objective; Outputs; Activities; Input/Resources
  • World Bank: Long-term Objectives; Short-term Objectives; Outputs; Inputs
  • AusAID: Scheme Goal; Major Development Objectives; Outputs; Activities; Inputs
  • 27. Review. Part of monitoring procedures is the project review. • A project review is a specific, formal examination of project implementation towards attaining its objective, carried out as part of project monitoring activities. • Real Time Reviews: Real Time Reviews are often considered a type of evaluation and have the same goals, i.e. to see if we are achieving our objectives. They are conducted while the response / programmes are still ongoing, and they aim to inform the next phase of the response / programmes as well as future responses / programmes. Often this is done after the first phase of the response / programme, between 3 and 6 months in.
  • 28. Theory of Change • 'Theory of change' is an outcomes-based approach which applies critical thinking to the design, implementation and evaluation of initiatives and programmes intended to support change in their contexts. • The description of a sequence of events that is expected to lead to a particular desired outcome. • 'Theory of change is an on-going process of reflection to explore change and how it happens - and what that means for the part we play in a particular context, sector and/or group of people.' • 'Theory of change is a dynamic, critical thinking process; it makes the initiative clear and transparent - it underpins strategic planning. It is developed in a participatory way over time, following a logical structure that is rigorous and specific, and that can meet a quality test by the stakeholders. The terminology is not important; it is about buying into the critical thinking.' (Helene Clark, Act Knowledge)
  • 30. Theory of Change Logical Framework • In practice, a Theory of Change typically: • Gives the big picture, including issues related to the environment or context that you can’t control. • Shows all the different pathways that might lead to change, even if those pathways are not related to your program. • Describes how and why you think change happens. • Could be used to complete the sentence “if we do X then Y will change because…”. • Is presented as a diagram with narrative text. • The diagram is flexible and doesn’t have a particular format – it could include cyclical processes, feedback loops, one box could lead to multiple other boxes, different shapes could be used, etc. • Describes why you think one box will lead to another box (e.g. if you think increased knowledge will lead to behaviour change, is that an assumption or do you have evidence to show it is the case?). • Is mainly used as a tool for program design and evaluation. • In practice, a Logical Framework: • Gives a detailed description of the program showing how the program activities will lead to the immediate outputs, and how these will lead to the outcomes and goal (the terminology used varies by organisation). • Could be used to complete the sentence “we plan to do X which will give Y result”. • Is normally shown as a matrix, called a logframe. It can also be shown as a flow chart, which is sometimes called a logic model. • Is linear, which means that all activities lead to outputs which lead to outcomes and the goal – there are no cyclical processes or feedback loops. • Includes space for risks and assumptions, although these are usually only basic. Doesn’t include evidence for why you think one thing will lead to another. • Is mainly used as a tool for monitoring.
  • 31. Indicators Defined • An indicator provides evidence that a certain condition exists or certain results have or have not been achieved (Brizius & Campbell, p. A-15). • Indicators enable decision-makers to assess progress towards the achievement of intended outputs, outcomes, goals, and objectives. As such, indicators are an integral part of a results-based accountability system. • Signals that show whether the standard has been attained. • Tools to measure and communicate the impact or result. • May be qualitative or quantitative.
  • 33. Management. Management is the planning, organizing, leading, and controlling of resources to achieve organizational goals effectively and efficiently. • Planning is choosing appropriate organizational goals and the correct directions to achieve those goals. • Organizing involves determining the tasks and the relationships that allow employees to work together to achieve the planned goals. • With leading, managers motivate and coordinate employees to work together to achieve organizational goals. • When controlling, managers monitor and measure the degree to which the organization has reached its goals. Management is also defined as the process of efficiently and effectively acquiring, developing, protecting, and utilizing organizational resources in the pursuit of organizational goals.
  • 34. Social Cohesion • The OECD [1] defines social cohesion as follows: a cohesive society works towards the well-being of all its members, fights exclusion and marginalisation, creates a sense of belonging, promotes trust, and offers its members the opportunity of upward mobility. While the notion of 'social cohesion' is often used with different meanings, its constituent elements include concerns about social inclusion, social capital and social mobility. Some of these elements can be quantified, and some countries have taken steps to develop suitable metrics in this field, e.g. through specific surveys assessing different aspects of people's social connections and civic engagement. However, most researchers define cohesion as task commitment and interpersonal attraction to the group.[3][4] Cohesion can be more specifically defined as the tendency for a group to be in unity while working towards a goal or to satisfy the emotional needs of its members.[4] • The EU Council defines it as the capacity of a society to ensure the well-being of all its members – minimising disparities and avoiding marginalisation – to manage differences and divisions and ensure the means of achieving welfare for all members. It is a political concept. • Social cohesion is a dynamic process and is essential for achieving social justice, democratic security and sustainable development. Divided and unequal societies are not only unjust, they also cannot guarantee stability in the long term. In a cohesive society the well-being of all is a shared goal that includes the aim of ensuring adequate resources are available to combat inequalities and exclusion.
  • 35. WDR’s framework [2] on social cohesion emphasises the way societies and groups manage possible collective action problems arising from economic and social transformations in and around national labor markets.
  • 36. Planning • Planning is a way to organize actions that will hopefully lead to the fulfillment of a goal. • A plan is a way to organize actions that will lead to the fulfillment of a goal by providing direction and an approach to follow. • How? By providing clear direction and an approach to follow: giving a method to your madness, so to speak.
  • 37. WHY SHOULD YOU DEVELOP A PLAN? • To make your life easier, of course. But, more specifically: • To help you map out how to get from point A to point B. • To make sure that you work in a more efficient and effective way. A plan is important because it focuses on the set of steps you will need to go through to achieve your ultimate goal of recruiting members. The planning stage is the time to decide what actions the organization will take to achieve its goal.
  • 38. Development • The cumulative and lasting increase, tied to social changes in the quantity and quality of a community’s goods, services and resources, with the purpose of maintaining and improving the quality and security of human life. • But to define development as an improvement in people's well-being does not do justice to what the term means to most of us. Development also carries a connotation of lasting change.
  • 39. Sustainable Development • Development that meets the needs of the present without compromising the ability of future generations to meet their own needs. • It contains within it two key concepts: the concept of "needs", in particular the essential needs of the world's poor, to which overriding priority should be given; and the idea of limitations imposed by the state of technology and social organization on the environment's ability to meet present and the future needs. (Brundtland Commission, 1987).
  • 41. An Overview of Strategic Planning or "VMOSA"
  • 42. What is VMOSA? • Vision • Mission • Objectives • Strategies • Action Plans
  • 43. What is VMOSA? • A practical strategic planning tool. • A blueprint for moving from dreams to actions to outcomes. • An ongoing process.
  • 44. Why use VMOSA? • To give your organization structure and direction. • To help build consensus about what to do and how to do it. • To focus your organization's efforts.
  • 45. When to use VMOSA • New organization. • New initiative or large project. • New phase of an ongoing effort. • Breathe life into an older initiative.
  • 46. Vision Statements: The dream • Examples: • "Healthy adolescents" • "Healthy babies" • "Caring parents" • "A community of hope" • "Safe sex" • "Teen power" • "Caring relationships"
  • 47. Mission: The what and why Examples: • To build a healthy community through a comprehensive initiative to promote jobs, education, and housing. • To promote adolescent health and development through school and community support and prevention.
  • 48. Objectives: The how much of what will be accomplished by when Examples: • By 2005, increase by 40% the number of adults who report caring activities with a child not their own. • By 2015, decrease by 25% the number of reported cases of child abuse and neglect.
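  Objectives of this form ("increase/decrease by N% by a given year") imply a numeric target once a baseline count is known. A small sketch of that arithmetic, using made-up baseline figures purely for illustration:

```python
def target_value(baseline: float, percent_change: float) -> float:
    """Turn an objective like 'increase by 40%' or 'decrease by 25%' into a numeric target."""
    return baseline * (1 + percent_change / 100)

# Hypothetical baselines for the two example objectives above
print(target_value(baseline=1000, percent_change=40))   # 1400.0 adults reporting caring activities
print(target_value(baseline=200, percent_change=-25))   # 150.0 reported cases of abuse and neglect
```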
  • 49. Strategies: The how • Examples: • Enhance experience and competence. • Remove barriers. • Increase support and resources. • Make outcomes matter.
  • 50. Action Plan: The specifics of who will do what, by when, at what costs • These consist of: • Action steps (what will be done). • People responsible (by whom). • Date completed (by when). • Resources required (costs). • Collaborators (who should know).
  • 51. The Sphere Project • A process that began in 1997 to address concerns of quality and accountability in humanitarian responses. • A Humanitarian Charter that emphasizes the "right to life with dignity". • Minimum Standards in Disaster Response: water, sanitation and hygiene promotion; food security, nutrition and food aid; shelter, settlement and non-food items; health services. www.sphereproject.org
  • 52. What is a natural hazard vs a disaster? • A natural hazard is a natural phenomenon that can potentially trigger a disaster. • Examples include earthquakes, mud-slides, floods, volcanic eruptions, tsunamis and drought. • These physical events need not necessarily result in disaster. • A disaster is a serious disruption of the functioning of a community or a society involving widespread human, material, economic or environmental losses and impacts, exceeding the ability of the community to cope using its own resources.
  • 53. What is risk? Risk is the product of hazards over which we have no control. It combines: • the likelihood or probability of a disaster happening; • the negative effects that result if the disaster happens – these are increased by vulnerabilities (characteristics/circumstances that make one susceptible to the damaging effects of a hazard) and decreased by capacities (a combination of strengths, attitudes and resources). • Acceptable risk: the level of loss a society or community considers it can live with and for which it does not need to invest in mitigation. • Risk assessment/analysis: a methodology to determine the nature and extent of risk by analyzing potential hazards and evaluating existing vulnerability that could pose a potential threat to people, property, livelihoods and the environment.
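  The components listed above are often combined into the heuristic score risk = hazard likelihood x vulnerability / capacity. The slide does not state that formula explicitly, so the sketch below should be read as one common way of operationalising it, with invented 1-to-5 ratings:

```python
def risk_score(hazard_likelihood: float, vulnerability: float, capacity: float) -> float:
    """Heuristic score: risk grows with hazard likelihood and vulnerability,
    and shrinks as coping capacity grows. All inputs rated on a 1-5 scale here."""
    return hazard_likelihood * vulnerability / capacity

# Made-up ratings for a flood-prone community
print(risk_score(hazard_likelihood=4, vulnerability=3, capacity=2))  # 6.0
print(risk_score(hazard_likelihood=4, vulnerability=3, capacity=4))  # 3.0 - doubling capacity halves the score
```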
  • 54. Emergency Management: the management and deployment of resources for dealing with all aspects of emergencies, in particular preparedness, response and rehabilitation. Disaster Risk Management (DRM): the comprehensive approach to reducing the adverse impacts of a disaster. It encompasses all actions taken before, during, and after disasters, and includes activities on mitigation, preparedness, emergency response, recovery, rehabilitation, and reconstruction. Disaster Risk Reduction (disaster reduction): the measures aimed at minimizing vulnerabilities and disaster risks throughout a society, to avoid (prevention) or to limit (mitigation and preparedness) the adverse impacts of hazards, within the broad context of sustainable development.
  • 55. Terminology. Prevention: outright avoidance of the adverse effects of hazards / disasters, or activities to ensure complete avoidance of the adverse impact of hazards. Mitigation: the process of lessening or limiting the adverse effects of hazards / disasters, or structural and non-structural measures undertaken to limit the adverse impact of natural hazards, environmental degradation and technological hazards. Preparedness: knowledge and capacities to effectively anticipate, respond to and recover from the impacts of likely hazards; activities and measures taken in advance to ensure effective response to the impact of hazards, including the issuance of timely and effective early warnings and the temporary evacuation of people and property from threatened locations. Risk Reduction: the practice of reducing risks through systematic efforts to analyze and manage the causal factors of disasters, including through reduced exposure, lessened vulnerability and improved preparedness. Response: provision of emergency services to save lives and meet needs.
  • 56. Natural Hazards: natural processes or phenomena occurring on the earth that may constitute a damaging event. Natural hazards can be classified by origin, namely geological, hydro-meteorological or biological. Hazardous events can vary in magnitude or intensity, frequency, duration, area of extent, speed of onset, spatial dispersion and temporal spacing. Hazard Analysis: identification, study and monitoring of any hazard to determine its potential, origin, characteristics and behaviour. Early Warning: the provision of timely and effective information, through identified institutions, to communities and individuals so that they can take action to reduce their risk and prepare for effective response. Climate Change: the climate of a place or region is changed if, over an extended period (typically decades or longer), there is a statistically significant change in measurements of either the mean state or variability of the climate for that region.
  • 57. Hazard: a potentially damaging physical event or phenomenon that may cause the loss of life or injury, property damage, social and economic disruption or environmental degradation. Hazards can be natural (geological, hydro-meteorological and biological) or induced by human processes (environmental degradation and technological hazards). Hazards can be single, sequential or combined in their origin and effects. Each hazard is characterized by its location, intensity, frequency and probability. In general, less developed countries are more vulnerable to natural hazards than industrialized countries because of lack of understanding, education, infrastructure, building codes, etc. Poverty also plays a role, since poverty leads to poor building structures, increased population density, and lack of communication and infrastructure.
  Vulnerability to Hazards and Disasters: vulnerability refers to the way a hazard or disaster will affect human life and property. Vulnerability to a given hazard depends on: • proximity to a possible hazardous event; • population density in the area proximal to the event; • scientific understanding of the hazard; • public education and awareness of the hazard; • existence or non-existence of early-warning systems and lines of communication; • availability and readiness of emergency infrastructure; • construction styles and building codes; • cultural factors that influence public response to warnings. Risk and vulnerability can sometimes be reduced if there is an adequate means of predicting a hazardous event, e.g. prediction, early warning systems, forecasting, etc.
  • 58. Development and habitation of lands susceptible to hazards: for example, building on floodplains subject to floods, on sea cliffs subject to landslides, on coastlines subject to hurricanes and floods, or on volcanic slopes subject to volcanic eruptions. Human activity can also increase the severity or frequency of a natural disaster: for example, overgrazing or deforestation leading to more severe erosion (floods, landslides), mining groundwater leading to subsidence, construction of roads on unstable slopes leading to landslides, or even contributing to global warming, leading to more severe storms. Affluence can also play a role, since affluence often controls where habitation takes place, for example along coastlines or on volcanic slopes. Affluence also likely contributes to global warming, since it is the affluent societies that burn the most fossil fuels, adding CO2 to the atmosphere.
  • 59. Assessing Hazards and Risk. Hazard assessment and risk assessment are two different concepts. Hazard Assessment consists of determining: when and where hazardous processes have occurred in the past; the severity of the physical effects of past hazardous processes (magnitude); the frequency of occurrence of hazardous processes; the likely effects of a process of a given magnitude if it were to occur now; and making all this information available in a form useful to planners and public officials responsible for making decisions in the event of a disaster. Risk Assessment involves not only the assessment of hazards from a scientific point of view, but also the socio-economic impacts of a hazardous event. Risk is a statement of the probability that an event will cause a given amount of damage, or a statement of the economic impact, in monetary terms, that an event will cause (see the worked example below). Risk assessment involves: hazard assessment, as above; the location of buildings, highways and other infrastructure in the areas subject to hazards; potential exposure to the physical effects of a hazardous situation; and the vulnerability of the community when subjected to the physical effects of the event. Risk assessment helps decision makers and scientists compare and evaluate potential hazards, set priorities on what kinds of mitigation are possible, and set priorities on where to focus resources and further study.
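To make the probability-times-damage idea concrete, here is a minimal illustrative sketch (not from the original slides); the hazard probabilities, asset values and vulnerability fractions are invented purely for the example.

```python
# Illustrative only: risk expressed as expected annual loss,
# combining hazard probability, exposure and vulnerability.
# All numbers below are made-up assumptions for the example.

hazards = {
    # name: (annual probability of the event, value of exposed assets in USD,
    #        fraction of that value expected to be lost if the event occurs)
    "river flood": (0.10, 2_000_000, 0.25),
    "earthquake":  (0.02, 5_000_000, 0.40),
}

for name, (p_event, exposed_value, vulnerability) in hazards.items():
    expected_annual_loss = p_event * exposed_value * vulnerability
    print(f"{name}: expected annual loss of about ${expected_annual_loss:,.0f}")
```

Comparing the two expected-loss figures is one simple way a planner might rank hazards before deciding where to focus mitigation resources.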
  • 61. Poverty and Deprivation. Life circumstances can include… poor housing and homelessness.
  • 62. Community infrastructure. Community infrastructure primarily refers to the small-scale basic structures, technical facilities and systems built at the community level that are critical for the sustenance of lives and livelihoods of the population living in a community. Community infrastructure is an integral sub-sector of the infrastructure sector. These are low-cost, small-scale infrastructures built over time through community-led initiatives according to the needs and aspirations of the community population. These micro-infrastructures are socially, economically and operationally linked with community lives and livelihood options and ensure basic services to the population; they are thus conceived as critical lifelines for the survival of the community. However, drawing a line between main infrastructure and community infrastructure is not easy, and a globally accepted definition of community infrastructure does not yet exist.
  • 63. Behavior change communication (BCC): an interactive process of any intervention with individuals, communities and/or societies (as integrated with an overall program) to develop communication strategies that promote positive behaviors appropriate to their settings.
  • 64. The Conceptual Framework: How BCC Works
  • 66. Differences between quantitative and qualitative research methods.
  • 67. Differences between quantitative and qualitative research methods.
  • 68. Definitions. Evaluation: the systematic and objective assessment of an ongoing or completed project, program or policy, focusing on design, implementation and results; also defined as the systematic application of social research procedures for assessing the conceptualisation, design, implementation and utility of programs. Research: systematic investigation designed to develop or contribute to generalised knowledge; includes developing, testing and evaluating hypotheses. Evaluation research: a type of applied research in which one tries to determine how well a program or policy is working or reaching its goals and objectives.
  • 69. To understand the concept of program evaluation, it is essential to know how it differs from research, as the two concepts are often confused. A common question is whether research and evaluation are the same or different; unfortunately, there is no easy answer. Experts have theorised the distinction in different ways, and four schools of thought or typologies view research and evaluation through different lenses. (1) Evaluation as a sub-section of research: this school of thought is premised on the notion that doing research does not necessarily require doing evaluation, but doing evaluation always requires doing research. (2) Research as a sub-section of evaluation: according to this notion, the research part of evaluation involves only collecting and analysing empirical data. (3) Research and evaluation as not mutually exclusive: this school of thought views research and evaluation as two unrelated variables that are not mutually exclusive; an activity can be both research and evaluation, or neither. Research is about being empirical; evaluation is about drawing evaluative conclusions about quality, merit or worth. (4) Research and evaluation as dichotomous (of two distinct kinds): another school of thought considers research and evaluation to be two completely separate streams of producing knowledge. Evaluation is viewed as more interested in specific, applied knowledge and as more controlled by those funding or commissioning the evaluation; research, on the other hand, is seen as interested in producing generalisable, theoretical knowledge controlled by the researchers.
  • 71. Distinguishing features of research and evaluation.
Agenda – Research: generally involves greater control (though often constrained by funding providers); researchers create and construct the field. Evaluation: works within a given brief / a set of “givens”, e.g. programme, field, participants, terms of reference and agenda, variables.
Audiences – Research: disseminated widely and publicly. Evaluation: often commissioned and becomes the property of the sponsors; not for the public domain.
Data sources and types – Research: a more focused body of evidence. Evaluation: has a wide field of coverage (e.g. costs, benefits, feasibility, justifiability, needs, value for money), so tends to employ a wider and more eclectic range of evidence from an array of disciplines and sources than research.
Decision making – Research: used for macro decision making. Evaluation: used for micro decision making.
Focus – Research: concerned with how something works. Evaluation: concerned with how well something works.
Origins – Research: from scholars working in a field. Evaluation: issued from/by stakeholders.
Outcome focus – Research: may not prescribe or know its intended outcomes in advance. Evaluation: concerned with the achievement of intended outcomes.
Ownership of data – Research: intellectual property held by the researcher. Evaluation: cedes ownership to the sponsor upon completion.
Participants – Research: less (or no) focus on stakeholders. Evaluation: focuses almost exclusively on stakeholders.
Politics of the situation – Research: provides information for others to use. Evaluation: may be unable to stand outside the politics of the purposes and uses of (or participants in) an evaluation.
Purposes – Research: contributes to knowledge in the field regardless of its practical application; provides empirical information, i.e. “what is”; conducted to gain, expand and extend knowledge, to generate theory, to “discover” and predict what will happen. Evaluation: designed to use the information/facts to judge the worth, merit, value, efficacy, impact and effectiveness of something, i.e. “what is valuable”; conducted to assess performance and provide feedback, to inform policy making and to “uncover”; the concern is with what has happened or is happening.
Relevance – Research: can have wide boundaries (e.g. to generalise to a wider community); can be prompted by interest rather than relevance. Evaluation: relevance to the programme or what is being evaluated is a prime feature; has to take particular account of timeliness and particularity.
Reporting – Research: may report to stakeholders/commissioners of research, but may also report more widely (e.g. in publications). Evaluation: reports to stakeholders and commissioners.
Scope – Research: often (though not always) seeks to generalise (external validity) and may not include evaluation. Evaluation: concerned with the particular, e.g. a focus only on specific programmes; seeks to ensure internal validity and often has a more limited scope.
Stance – Research: active and proactive. Evaluation: reactive.
Standards for judging quality – Research: judgements are made by peers; standards include validity, reliability, accuracy, causality, generalisability and rigour. Evaluation: judgements are made by stakeholders; standards also include utility, feasibility, involvement of stakeholders, side effects, efficacy and fitness for purpose.
Status – Research: an end in itself. Evaluation: a means to an end.
Time frames – Research: often ongoing and less time bound, although this is not the case with funded research. Evaluation: begins at the start of a project and finishes at its end.
Use of results – Research: designed to demonstrate or prove; provides the basis for drawing conclusions and information on which others might or might not act, i.e. it does not prescribe. Evaluation: designed to improve; provides the basis for decision making and might be used to increase or withhold resources or to change practice.
  • 72. Applied Research: applied research refers to scientific research that seeks to solve practical (programmatic) problems. It is used to find solutions, cure illnesses, and develop innovations and new strategies. We can use it to design new programmes.
  • 73. Survey Methodologies. “Perhaps the most important part of the survey process is the creation of questions that accurately measure the opinions, experiences and behaviors of the public. Accurate random sampling and high response rates will be wasted if the information gathered is built on a shaky foundation of ambiguous or biased questions.” (1) There are three main types of survey question, and each has its own risks and benefits. Open-ended questions, for example: “What do you think are the reasons some adolescents in this area start using drugs?”; “What would you do if you noticed that your daughter (a school girl) had a relationship with a teacher?”. Partially categorized questions, for example: “How did you become a member of the Village Health Committee?” (4), with the response categorized into these options: volunteered; elected at a community meeting; nominated by community leaders; nominated by the health staff; other ____. Closed questions, for example: “Did you eat any of the following foods yesterday?” (6) Peas, beans, lentils (yes/no); fish or meat (yes/no); eggs (yes/no); milk or cheese (yes/no); insects (yes/no).
  • 74. KAP Survey. KAP surveys are focused evaluations that measure changes in human knowledge, attitudes and practices in response to a specific intervention. The KAP survey was first used in the fields of family planning and population studies in the 1950s. KAP studies use fewer resources and tend to be more cost-effective than other social research methods because they are highly focused and limited in scope; broadly, they tell us what people know, how they feel and how they behave, and each study is designed for a specific setting and issue. (11) “The attractiveness of KAP surveys is attributable to characteristics such as an easy design, quantifiable data… concise presentation of results, cultural comparability, speed of implementation, and the ease with which one can train enumerators.” (12) In addition, KAP studies bring to light the social, cultural and economic factors that may influence health. There is growing recognition within the international aid community that improving the health of poor people across the world depends upon adequate understanding of the socio-cultural and economic aspects of the context in which they live; such information has typically been gathered through various types of cross-sectional surveys, the most popular and widely used being the knowledge, attitude and practice (KAP) survey. KAP research protocols; KAP surveys and public health. The shortcomings of KAP surveys: data can be hard to interpret accurately; there is no standardized approach to validate findings; and analyst biases can affect KAP surveys. Other criticisms: a main criticism of KAP surveys is that their findings generally lead to prescriptions for mass behavior modification instead of targeting interventions towards individuals. For example, a study which used KAP surveys to address diffuse behaviors in undifferentiated populations concluded that such approaches are not productive in low-seroprevalence populations, especially when the objective is to design interventions to avert further infection, and that the surveys’ treatment of behavioral data for individuals and for populations makes them fundamentally flawed for such purposes. Another major problem is that investigators use the surveys to try to explain health behavior and action. (29) A study on malaria control in Vietnam found that though respondents had a surprisingly high level of knowledge and awareness regarding malaria, “the findings are of limited value because of … preventive actions and health-seeking behaviors. Anecdotal evidence suggests there are deficiencies in these important practices, but the study design did not permit us to explore these.” (30) In addition, KAP surveys often fail to explain why and when certain treatment practices are chosen; in other words, they fail to explain the logic behind treatment-seeking practices. The alternative: KAP surveys can be useful when the research plan is to obtain general information about public health knowledge and sociological variables. However, “if the objective is to study health-seeking knowledge … [other methods are] available, including focus group discussions, in-depth interviews, participant observation, and various participatory methods.” (32) The preference for qualitative methods is corroborated by a study which found that, although the KAP survey generated useful findings, “an initial, qualitative investigation (e.g. observation and focus group discussions) to explore the large numbers of potential influences on behavior and exposure risk would have … strengthened its validity and generated additional information.” (33) A study conducted by Agyepong and Manderson (1999) also confirms this notion and argues “that truly qualitative methods, such as [these], are vital foundations for exploratory investigations at the community level, and should precede and underpin population-level approaches, such as KAP surveys.” Conclusion: the survey is critical to designing public health interventions and assessing their impact. There are a variety of methodologies that can be used when designing surveys: open-ended questions, partially categorized questions and closed questions. Each has its own benefits and drawbacks, though partially categorized questions are considered to yield the most accurate and reliable data. KAP surveys explore respondents’ knowledge, attitudes and practices, together with background characteristics, that may serve to explain health risks and behaviors. Though they are very useful for obtaining general information about sociological and cultural variables, they should not be relied on as the only type of study or survey.
  • 75. Comparisons between terminologies of different donor agencies for results/logical frameworks. Compiled by Jim Rugh for CARE International and InterAction’s Evaluation Interest Group; this table has been referred to as “The Rosetta Stone of Logical Frameworks”. Sources: [1] CARE Impact Guidelines, October 1999. [2] PC/LogFrame (tm) 1988-1992, TEAM Technologies, Inc. [3] Results Oriented Assistance Sourcebook, USAID, 1998. [4] The Logical Framework Approach to Portfolio Design, Review and Evaluation in A.I.D.: Genesis, Impact, Problems and Opportunities, CDIE, 1987. [5] A Guide to Appraisal, Design, Monitoring, Management and Impact Assessment of Health & Population Projects, ODA [now DFID], October 1995. [6] Guide for the Use of the Logical Framework Approach in the Management and Evaluation of CIDA’s International Projects, Evaluation Division. [7] ZOPP in Steps, 1989. [8] Project Cycle Management: Integrated Approach and Logical Framework, Commission of the European Communities Evaluation Unit, Methods and Instruments for Project Cycle Management, No. 1, February 1993. [9] Project Appraisal and the Use of Project Document Formats for FAO Technical Cooperation Projects. Pre-Course Activity: Revision of Project Formulation and Assigned Reading. Staff Development Group, Personnel Division, August 19. [10] UNDP Policy and Program Manual. [11] The Logical Framework Approach (LFA): Handbook for Objectives-Oriented Project Planning. [12] Project Planning in UNHCR: A Practical Guide on the Use of Objectives, Outputs and Indicators for UNHCR Staff and Implementing Partners, Second Version, March 2002. [13] AusAID NGO Package of Information, 1998.
  • 76. Comparisons between success story and case study. Characteristics of a case study: (1) intensive study; (2) in-depth examination; (3) a systematic way of collecting, analysing and reporting; (4) understanding why and what; (5) generating and testing hypotheses; (6) involvement of stakeholders in the identification of variables; (7) ratifies data/numbers. A case study is a qualitative research method (who, what, when, where, why and how?). It is an important method because: (1) it provides in-depth analysis for informed decision making; (2) it inspires innovation and reflection; (3) it validates hypotheses; (4) it offers an integrated study of influencing factors, leading to holistic views; (5) it is useful for building credibility and evidence; (6) it helps in sharing knowledge with different target audiences. Success story: as part of the world café exercise, certain characteristics of a success story were discussed and agreed upon. Some of them are: short, clear, simple and readable; reflecting positive change and transformation; highlighting project interventions; empowering for communities; generates a buzz/attracts immediate attention.
  • 77. Good practices / best practice: something that we have learned from experience on a number of similar projects around the world. This requires looking at a number of lessons learned from projects in the same field and noticing a trend that seems to hold true for all projects in that field. Key characteristics: replicable/adaptable; sustainable; a proven process and methodology (within a geographical location); reflects the process; community-owned and tested procedures; tested innovations. Reasons they are useful to generate: they sustain over a period of time; provide a roadmap for scaling up; save time; put knowledge to use for further replication; and share knowledge by providing options and choices. Lesson learned: a short, simple description of something we have learned from experience on a specific project or program. It should be supported with evidence from our monitoring and evaluation, and should be useful to other people implementing similar projects around the world. Key characteristics: lessons learned come from real experiences, feedback and other processes in the project cycle; they are factual; they are generated within a time frame; they are derived from reflection, analysis and other qualitative and quantitative data; and they can be generated individually or through the involvement of other stakeholders. Reasons for sharing them: lessons learned can be an important input to project implementation; they help improve outputs and outcomes; used effectively, they can save time, energy and resources; they can form a large knowledge base for designing future activities and innovations; and some lessons can be used as advocacy material and can be useful for initiating policy dialogue.
  • 78. Social mobilization should not be confused with social mobility, which is the movement of a group of people on the socio-economic scale over time; for example, winning the lottery when living in poverty would raise your socio-economic status to a wealthier and more comfortable position. UNICEF defines social mobilization as a broad-scale movement to engage people’s participation in achieving a specific development goal through self-reliant effort. Social mobilization is the process of getting people, especially the rural poor, organized in order to improve their own situation; the philosophy is “helping people to help themselves”. Social mobilisation is the process of bringing together all stakeholders to raise people’s awareness of, and demand for, a particular programme (health etc.), to assist in the delivery of resources and services and to strengthen community participation for sustainability and self-reliance. It recognizes that sustainable social and behavioural change requires many levels of involvement, from individual to community to policy and legislative action; isolated efforts cannot have the same impact as collective ones. Community mobilisation is the process of engaging communities to identify community priorities, resources, needs and solutions in such a way as to promote representative participation, good governance, accountability and peaceful change. Sustained mobilisation takes place when communities remain active and empowered after the programme ends. Community mobilizing is when experts drive the action on an issue and are the ones who know the solutions; it is categorized as issue-oriented, its process is driven by action, and it can be a confrontational process. Community organizing, on the other hand, is when issues arise out of a community consensus; this process is goal-oriented and not confrontational, because everyone agrees that the issue exists and is important. A hybrid of both community mobilizing and community organizing efforts is crucial if a coalition wants to achieve real outcomes. Difference between social mobilization and community mobilization: social mobilization means mentally preparing people to achieve a goal, while community mobilization means a specific group of people’s readiness to achieve the desired level of goals and aims.
  • 79. Social organization is the way a group of people interact in their community, for example, how they divide up power and access to resources and specific goods. Theme: social organization is a method through which the different components of society are woven together to achieve optimum utilization of human and other resources. Practices: social organization is a method through which a group of people is organized to foster the process of development and ensure community participation on a sustainable basis.
  • 80. Most Significant Change. Most Significant Change (MSC) is an approach that collects a series of stories from program participants, which are analyzed in successive rounds by stakeholder groups to identify the most significant or meaningful examples of changes brought about during the program. The MSC technique is a form of participatory monitoring and evaluation: it is participatory because many project stakeholders are involved both in deciding the sorts of change to be recorded and in analysing the data; it is a form of monitoring because it occurs throughout the program cycle and provides information to help people manage the program; and it contributes to evaluation because it provides data on impact and outcomes that can be used to help assess the performance of the program as a whole. Essentially, the process involves the collection of significant change (SC) stories emanating from the field level, and the systematic selection of the most significant of these stories by panels of designated stakeholders or staff. The designated staff and stakeholders are initially involved by ‘searching’ for project impact. Once changes have been captured, various people sit down together, read the stories aloud and have regular and often in-depth discussions about the value of these reported changes. When the technique is implemented successfully, whole teams of people begin to focus their attention on program impact.
  • 81. Governance: governance in general relates to the process of decision-making and how those decisions are implemented. Accountability is an essential characteristic of good governance, where leaders are accountable for their decisions to the people affected by those decisions. When these processes are institutionalized they become a system of government. Governance is good when it is accountable, transparent, just, responsive and participatory. Good governance is a goal of community mobilization and a condition for all development initiatives to be sustainable. Triangulation: data collection from three different sources about the same subject. This is considered the best way to ensure that our information is valid. For example, if we want to know about the effects of a community mobilization project, we might collect data via (1) interviews with key participants, including our own staff, (2) a document review to understand exactly what services were delivered and in what amounts, and (3) focus groups and/or a survey of project participants. This helps us avoid the natural biases of any one method of data collection. Although three different sources are not always possible, the primary point is to avoid reliance on a single source or perspective. Targets: sometimes called “milestones” or “benchmarks”, these tell us what we plan to achieve at specific points in the life of our projects or programs. We use them to monitor our progress toward completion of our activities.
  • 82. Formal education is classroom-based, provided by trained teachers. It is learning that occurs in an organised and structured environment (such as in an education or training institution or on the job) and is explicitly designated as learning (in terms of objectives, time or resources). Formal learning is intentional from the learner’s point of view and typically leads to certification. “What students are taught from the syllabus.” Non-formal education: “Learning resulting from daily activities related to work, family or leisure. It is not organised or structured in terms of objectives, time or learning support. Informal learning is in most cases unintentional from the learner’s perspective. Comments: informal learning outcomes may be validated and certified; informal learning is also referred to as experiential or incidental/random learning.” Non-formal education adopts a strategy in which full student attendance is not required; the educative process has a more flexible curriculum and methodology, and the activities or lessons take place outside institutions or schools. Here the needs and interests of the students are taken into consideration. There are two features of non-formal education that need to be constant: centring the process on the student, according to his or her previously identified needs and possibilities; and the immediate usefulness of the education for the student’s personal and professional growth. Because of the importance of the interests and needs of the students, this form of education meets individual needs better. Non-formal education is focused on the student, with the result that the student participates more, and when the needs of the students change, non-formal education can react more quickly because of its flexibility. Non-formal learning is a loosely defined term covering various structured learning situations, such as swimming sessions for toddlers, community-based sports programs and conference-style seminars, which do not have the level of curriculum, syllabus, accreditation and certification associated with formal learning.
  • 83. Informal education happens outside the classroom, in after-school programs, community-based organizations, museums, libraries, or at home. “Informal education is that learning which goes on outside of a formal learning environment such as a school, a college or a university, therefore it is learning outside of the classroom/lecture theatre; however more can be said by way of providing a definition of the term. Informal education can be seen as ‘learning that goes on in daily life’, and/or ‘learning projects that we undertake for ourselves’” (Smith, 2009). It has also been described as “learning that goes on in daily life and can be received from daily experience, such as from family, peer groups, the media and other influences in a person’s environment” (Oñate, 2006). “It encompasses a huge variety of activities: it could be a dance class at a church hall, a book group at a local library, cookery skills learnt in a community centre, a guided visit to a nature reserve or stately home, researching the National Gallery collection on-line, writing a Wikipedia entry or taking part in a volunteer project to record the living history of [a] particular community.” (DIU&S, 2009: p4).
  • 84. Protection encompasses all activities aimed at ensuring full respect for the rights of the individual in accordance with human rights law, international humanitarian law (which applies in situations of armed conflict) and refugee law. (1) It also refers to a legal or other formal measure intended to preserve civil liberties and rights. Child protection is the protection of children from violence, exploitation, abuse and neglect; Article 19 of the UN Convention on the Rights of the Child provides for the protection of children in and out of the home.
  • 85. Cluster sampling refers to a type of sampling method in which the researcher divides the population into separate groups, called clusters, and then selects a simple random sample of clusters from the population. For example, a researcher who wants to survey the academic performance of high school students in Spain can divide the entire population (the population of Spain) into different clusters (cities), and then select a number of clusters through simple or systematic random sampling, depending on the research. Sampling: in statistics, quality assurance and survey methodology, sampling is concerned with the selection of a subset of individuals from within a statistical population to estimate characteristics of the whole population. In business and medical research, sampling is widely used for gathering information about a population, and in statistics it allows you to test a hypothesis about the characteristics of a population. Sampling is the process of selecting units (e.g. people, organizations) from a population of interest so that by studying the sample we may fairly generalize our results back to the population from which they were chosen. It is divided into probability sampling and non-probability sampling. Lot Quality Assurance Sampling (LQAS) fundamentals: LQAS is used for the monitoring of (mainly health) programme coverage indicators, based on a stratified simple random sample of a small number of geographical units per stratum, also called a ‘lot’. It is seen as a good alternative to more complex and often more costly sampling techniques. The method is particularly suitable for frequently conducted monitoring surveys on programme coverage and other performance indicators in settings that do not require a high level of statistical precision. LQAS tests whether a given threshold value is achieved or not, rather than producing estimates for an indicator, although different lots can be combined in order to estimate overall programme performance in terms of coverage (see the sketch below).
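To illustrate the threshold logic, here is a minimal, hypothetical sketch (not part of the original slides): it classifies each lot against a decision rule and uses the binomial distribution to show the misclassification risks implied by that rule. The lot size of 19 and the decision rule of 13 are assumptions chosen for the example.

```python
from math import comb

def binom_cdf(k, n, p):
    # P(X <= k) for X ~ Binomial(n, p), computed from first principles.
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def classify_lot(successes, n=19, decision_rule=13):
    # A lot "reaches the target" if at least `decision_rule` of the n sampled
    # respondents show the desired outcome (e.g. a vaccinated child).
    return "reaches target" if successes >= decision_rule else "below target"

# Assumed example: upper coverage benchmark of 80%, lower threshold of 50%.
n, d = 19, 13
print(classify_lot(successes=15, n=n, decision_rule=d))   # reaches target
print(classify_lot(successes=9, n=n, decision_rule=d))    # below target

# Risk of wrongly failing a lot whose true coverage is 80%:
print(round(binom_cdf(d - 1, n, 0.80), 3))
# Risk of wrongly passing a lot whose true coverage is only 50%:
print(round(1 - binom_cdf(d - 1, n, 0.50), 3))
```

Because the decision is a simple pass/fail count per lot, field supervisors can apply it without any statistical software, which is part of why LQAS suits routine monitoring.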
  • 86. Types of sampling: sampling methods. Sampling in market research is of two types: probability sampling and non-probability sampling. (1) Probability sampling: a sampling technique in which the researcher sets selection criteria and members of the population are chosen randomly, so that all members have a known and equal opportunity to be part of the sample; researchers can calculate the probability of any single person in the population being selected for the study. These studies provide greater mathematical precision and analysis. (2) Non-probability sampling: the researcher chooses members for the research arbitrarily or by convenience rather than through a random mechanism; there is no fixed or predefined selection process. This makes it difficult for all elements of the population to have an equal opportunity to be included in the sample, and researchers cannot calculate the probability of any individual being in the study. These samples tend to be less accurate and less representative of the larger population.
  • 87. Probability sampling is a sampling technique in which researchers choose samples from a larger population using a method based on the theory of probability. This sampling method considers every member of the population and forms samples through a fixed process: for example, in a population of 1000 members, every member has a 1/1000 chance of being selected for the sample. Probability sampling eliminates sampling bias and gives all members a fair chance to be included in the sample. There are four types of probability sampling techniques. Simple Random Sampling (SRS): every element of the population has an equal probability of being selected. It is a reliable method of obtaining information in which every single member of the population is chosen randomly, merely by chance; examples are the lottery method and a table of random numbers. Stratified Sampling (a variation of random sampling; a stratum is a group): we group the entire population into subpopulations (strata) by some common property, for example class labels in a typical ML classification task, and then randomly sample from those groups individually, such that the groups are maintained in the same ratio as in the entire population. For instance, with two groups in a count ratio of x and 4x based on colour, we sample randomly from each colour group separately and represent the final set in that same ratio. Professionals may divide strata into categories such as age, gender, income and profession. Cluster Sampling: we divide the entire population into subgroups, each of which has characteristics similar to those of the population as a whole, and instead of sampling individuals we randomly select entire subgroups; for example, from four clusters with similar properties (size and shape), we might randomly select two clusters and treat them as the sample. A real-life example is a class of 120 students divided into groups of 12 for a common class project, where clustering parameters (designation, class, topic) are similar across groups. Another example: to examine the dining habits of residents in a certain state, you could divide the residents into clusters based on the county they live in and then use a random sampling method to select eight counties for the study. Cluster sampling differs from stratified sampling because some clusters are unrepresented in the final sample, whereas researchers use members from every stratum in stratified sampling. (A code sketch of these three techniques follows below.)
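As a concrete illustration (not from the original slides), the short Python sketch below draws a simple random sample, a proportionate stratified sample and a cluster sample from a small made-up population; the population, strata and cluster labels are invented for the example.

```python
import random
from collections import defaultdict

random.seed(42)

# Made-up population: 60 people, each with a stratum (gender) and a cluster (village).
population = [
    {"id": i, "gender": random.choice(["F", "M"]), "village": f"V{i % 6}"}
    for i in range(60)
]

# 1. Simple random sampling: every person has an equal chance of selection.
srs = random.sample(population, k=12)

# 2. Proportionate stratified sampling: sample within each stratum,
#    keeping the strata in roughly the same ratio as in the population.
strata = defaultdict(list)
for person in population:
    strata[person["gender"]].append(person)
stratified = []
for members in strata.values():
    k = round(len(members) / len(population) * 12)  # proportional allocation
    stratified.extend(random.sample(members, k=k))

# 3. Cluster sampling: randomly pick whole villages and take everyone in them.
villages = sorted({p["village"] for p in population})
chosen_villages = random.sample(villages, k=2)
cluster_sample = [p for p in population if p["village"] in chosen_villages]

print(len(srs), len(stratified), len(cluster_sample))
```

The design trade-off is visible in the code: stratification controls the composition of the sample, while cluster sampling trades some representativeness for the practical convenience of visiting only a few locations.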
  • 88. Systematic/Quasi-Random Sampling: systematic sampling means sampling items from the population at regular, predefined (fixed and periodic) intervals. Because the range is predefined, it is the least time-consuming technique. For example, you can compile a list of 250 individuals in a population and use every fifth person as a study participant. This method tends to be more effective than vanilla random sampling in general; it aims to eliminate bias and can be easier to achieve than random sampling, but it differs from simple random sampling because the systematic method does not offer every member of the population the same probability of being chosen. Multistage Sampling: under multistage sampling, we stack multiple sampling methods one after the other. For example, at the first stage cluster sampling can be used to choose clusters from the population, and we can then perform random sampling to choose elements from each cluster to form the final set. Multistage sampling occurs when you use different sampling methods at different stages of the same study, and it is helpful for large population sizes. For instance, to determine how much support a new government initiative has across the country, it is not practical to list every person in the country, so you might start by creating clusters in stage one for each state or geographic region (such as southwest, southeast, northeast and northwest); in the next stage you might further divide these clusters into strata and choose random samples from each stratum (a systematic/multistage sketch follows below). Bias in sampling: sampling bias occurs when some members of a population are systematically more likely to be selected in a sample than others. There are five important potential sources of bias that should be considered when selecting a sample, irrespective of the method used. Sampling bias may be introduced when: (1) any pre-agreed sampling rules are deviated from; (2) people in hard-to-reach groups are omitted; (3) selected individuals are replaced with others, for example if they are difficult to contact; (4) there are low response rates; (5) an out-of-date list is used as the sample frame (for example, if it excludes people who have recently moved to an area). Sampling definitions: Population – the total number of people or things you are interested in. Sample – a smaller number within your population that will represent the whole. Sampling error – any type of bias that is attributable to mistakes in either drawing a sample or determining the sample size.
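A minimal sketch of the systematic and multistage ideas described above (illustrative only; the frame of 250 people, the interval of 5, the region labels and the within-region sample size are all assumptions):

```python
import random

random.seed(7)

# Systematic sampling: every k-th person after a random start.
people = list(range(1, 251))          # an assumed frame of 250 individuals
k = 5                                 # sampling interval (every fifth person)
start = random.randrange(k)           # random start between 0 and k-1
systematic_sample = people[start::k]  # yields 50 people

# Multistage sampling: stage 1 picks whole regions (clusters),
# stage 2 draws a simple random sample of people within each chosen region.
frame = {
    "southwest": list(range(100)),
    "southeast": list(range(100, 200)),
    "northeast": list(range(200, 300)),
    "northwest": list(range(300, 400)),
}
stage1_regions = random.sample(sorted(frame), k=2)
stage2_sample = []
for region in stage1_regions:
    stage2_sample.extend(random.sample(frame[region], k=10))

print(len(systematic_sample), stage1_regions, len(stage2_sample))
```

Note the caveat from the prose: if the sampling interval happens to coincide with a periodic pattern in the frame (for example, every fifth house on a street corner), the systematic sample can be biased even though it looks random.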
  • 89. In non-probability sampling, the sample is selected based on non-random, subjective criteria, and not every member of the population has a chance of being included. The non-probability method relies on the researcher’s or statistician’s sample-selection judgement rather than on a fixed selection process. In most situations, a survey conducted with a non-probability sample produces skewed results which may not represent the desired target population; but in some situations, such as the preliminary stages of research or where cost constraints apply, non-probability sampling is much more useful than probability sampling. Common non-probability sampling methods include convenience sampling, voluntary response sampling, purposive/judgemental sampling, snowball sampling and quota sampling. Convenience sampling (accidental/haphazard sampling): the researcher includes only those individuals who are most accessible and available to participate in the study, for example a group of people walking by on a street or people polled at a nearby park; the researcher has little control over the composition of the sample group, and selection is done on proximity rather than representativeness. This type of sampling is both cost- and time-efficient, since researchers can gather people relatively quickly, and it is easier and cheaper than random sampling, but the results cannot be generalized, which makes it less reliable; it is used when there are time and cost limitations in collecting feedback, such as in the initial stages of research. Voluntary response sampling: interested people take part by themselves, usually by filling in some sort of survey form; participants select themselves rather than being selected by those carrying out the research, and the researcher has no right to choose anyone. A well-known example is the YouTube survey “Have you seen any of these ads?”; another is a teaching assistant sending an evaluation survey by email asking for feedback on their performance. Voluntary response samples are typically unrepresentative and not random, as only respondents with strong opinions are likely to participate. Snowball sampling (also known as chain-referral sampling): the final set is chosen via other participants, i.e. the researcher asks known contacts to find people who would like to participate in the study. Researchers use snowball sampling when the study involves groups of people who are hard to reach or access: they ask the test subjects they already have to contact and nominate others to take part, so the number of people you have access to “snowballs” as you get in contact with more people. While this can be an effective way to gather participants, it makes the composition of the test group more difficult to control. Quota sampling: researchers create a sample based on predefined traits; for example, the researcher might gather a group of people who are all aged 65 or older. This allows researchers to gather data easily from a specific demographic: selection happens against a pre-set standard, so the created sample has the same qualities as the total population for those attributes, and it is a rapid method of collecting samples (a quota-sampling sketch follows below). Quota sampling is classified into two types: controlled quota sampling, where restrictions are imposed on the researcher’s/statistician’s choice of sample and only limited samples can be selected; and uncontrolled quota sampling, where no such restrictions are imposed and the researcher can select samples of their own interest. Judgmental/purposive/authoritative/selective/subjective sampling: the chosen research subjects are solely at the discretion of the researcher, who is responsible for picking individuals who they feel would be a positive addition to the study; to create the sample, researchers may ask prospective individuals a few questions relating to the study and then decide based on their answers. It is the process whereby the researcher selects a sample based on experience or knowledge of the group to be sampled, hence the name “judgment” sampling. It is often used in qualitative research, where the researcher wants to gain detailed knowledge about a specific phenomenon rather than make statistical inferences, or where the population is very small and specific; an effective purposive sample must have clear criteria and a clear rationale for inclusion.
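To make the quota idea concrete, here is a small, hypothetical Python sketch (not from the original slides): it fills predefined quotas per age band from a stream of willing respondents. The quota sizes, age bands and respondent list are assumptions for the example.

```python
# Illustrative quota sampling: keep accepting respondents until each
# predefined quota (here, by age band) is full, then stop accepting that band.

quotas = {"18-39": 3, "40-64": 3, "65+": 4}     # assumed target counts
filled = {band: [] for band in quotas}

def age_band(age):
    if age >= 65:
        return "65+"
    return "40-64" if age >= 40 else "18-39"

# Assumed stream of respondents as (name, age) pairs.
respondents = [("A", 70), ("B", 22), ("C", 45), ("D", 81), ("E", 30),
               ("F", 67), ("G", 52), ("H", 19), ("I", 74), ("J", 41)]

for name, age in respondents:
    band = age_band(age)
    if len(filled[band]) < quotas[band]:        # accept only while the quota is open
        filled[band].append(name)

print(filled)
```

The sketch also shows the method's weakness: the quotas control the sample's composition, but who actually ends up inside each quota still depends on who happened to come forward first.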
  • 90. Difference between probability and non-probability sampling methods. How to choose the correct sample size: finding the best sample size for your target population is something you will need to do again and again, as it is different for every study. To make life easier, we have provided a sample size calculator. To use it, you need to know your population size, confidence level and margin of error (confidence interval). If any of those terms are unfamiliar, have a look at our blog post on determining sample size for details of what they mean and how to find them (a worked formula sketch follows below).
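As an illustration of how those three inputs combine, one standard approach (not taken from the slides themselves) is Cochran's formula with a finite population correction, sketched below with assumed inputs of a population of 5000, 95% confidence and a 5% margin of error.

```python
import math

def sample_size(population, confidence_z=1.96, margin_of_error=0.05, p=0.5):
    # Cochran's formula: n0 = z^2 * p * (1 - p) / e^2,
    # then a finite population correction: n = n0 / (1 + (n0 - 1) / N).
    # p = 0.5 is the conservative default when the true proportion is unknown.
    n0 = (confidence_z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

# Assumed example: population of 5000, 95% confidence (z = 1.96), 5% margin of error.
print(sample_size(5000))   # roughly 357 respondents
```

Tightening the margin of error or raising the confidence level increases the required sample quickly, which is why these three inputs have to be agreed before fieldwork is costed.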
  • 91. Important terminologies. Population: the entire group of people, events or things of interest that the researcher wishes to investigate. Universe: the larger group from which the individuals are selected to participate in the study. Element: a single member of the population. Sample: a subset of the population comprising some of its members, or the representatives selected for a study whose characteristics exemplify the larger group from which they were selected. Sampling unit: the element or set of elements that is available for selection at some stage of the sampling process. Subject: a single member of the sample, just as an element is a single member of the population.

Editor's Notes

  1. Life circumstances adapted from PATH Project (2008) What do people with multiple & complex needs want from services?