3. • Program evaluation is a way to evaluate
the specific projects and activities
community groups may take part in,
rather than to evaluate an entire
organization or comprehensive
community initiative.
• An evaluation study addresses the quality
of medical care, utilization and coverage of
health services, benefits to community
health in terms of morbidity and mortality
reduction, and improvement in the health
status of the recipients of care.
INTRODUCTION
4. • Program:
Any set of related activities
undertaken to achieve an intended
outcome; any organized public health
action.
• Evaluation:
A systematic process to assess the
achievement of the stated objectives of a
programme, its adequacy, efficiency, and its
acceptance by all parties involved.
• Monitoring:
A planned, systematic process of
observation that closely follows a course of
activities and compares what is happening
with what is expected to happen.
DEFINITIONS
5. DIFFERENCE BETWEEN MONITORING AND EVALUATION
Monitoring
• Purpose is to track
implementation progress
through periodic data collection.
• Goal is to provide early
indications of progress (or lack
thereof).
• It determines programme
efficiency
• It establishes standard of
performance at the activity level
• It alerts management to
discrepancies
• It identifies strong and weak
points of programme operations
and objectives
Evaluation
• Purpose is to determine
effectiveness of a specific
program or model and
understand why a program may
or may not be working.
• Goal is to improve programs.
• It determines programme
effectiveness
• It identifies inconsistencies
within the programme
• It suggests changes in
programme procedures, objectives
and activities
• It identifies the possible side
effects of the programme
6. • To review the implementation of services provided by health programmes so
as to identify problems and recommend necessary revisions of the
programme.
• To assess progress towards desired health status at national or state levels
and identify reasons for gap, if any.
• To contribute towards better health planning
• To document results achieved by a project funded by donor agencies.
• To know whether desired health outcomes are being achieved and identify
remedial measures.
• To improve health programmes and the health infrastructure.
• To guide the allocation of resources in current and future programmes.
• To render health activities more relevant, more efficient and more effective.
NEED FOR EVALUATION OF HEALTH SERVICES
9. Formative evaluation ensures that a program or program activity is feasible,
appropriate, and acceptable before it is fully implemented. It is usually conducted when
a new program or activity is being developed or when an existing one is being adapted
or modified.
Process/implementation evaluation determines whether program activities have been
implemented as intended.
Outcome/effectiveness evaluation measures program effects in the target population by
assessing the progress in the outcomes or outcome objectives that the program is to
achieve.
Impact evaluation assesses program effectiveness in achieving its ultimate goals.
12. Determine what is to be evaluated
Establish standards and criteria
Plan the methodology to be applied
Gather information
Analyze the results
Take action
Re-evaluate
STEPS OF EVALUATION
13. 1. Determine what is to be evaluated:
There are 3 types of evaluation:
Evaluation of structure:
• whether facilities, equipment, manpower and organization meet a
standard accepted by experts as good
Evaluation of process:
• the way in which the various activities of the program are carried out
Evaluation of outcome:
• concerned with end results: whether persons using the services
experience measurable benefits or not
14. 2. Establishment of standards and criteria:
Standards and criteria must be established to determine how well the desired
objectives have been attained.
Structural criteria:
• e.g., physical facilities and equipment
Process criteria:
• e.g., every school-going child should receive a dental check-up
once in 6 months
Outcome criteria:
• alterations in health status (positive, negative) or behaviour
resulting from health care (satisfaction, dissatisfaction) or the
educational process
15. • Effectiveness: It is the extent to
which the underlying problem is
prevented or alleviated. The ultimate
measures of effectiveness will be the
reduction in morbidity and mortality
rates.
• Efficiency: It is a measure of how well
resources (money, men, material and
time) are utilized to achieve a given
effectiveness.
Relevance
Adequacy
Accessibility
Acceptability
Effectiveness
Efficiency
Impact
Components of Evaluation
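As a numeric illustration of the effectiveness and efficiency components defined above, the sketch below uses invented figures (the rates, costs, and case counts are assumptions, not data from the slides):

```python
# Hypothetical sketch: quantifying effectiveness and efficiency.
# All figures are invented for illustration.

def effectiveness(baseline_rate: float, current_rate: float) -> float:
    """Percent reduction in a morbidity/mortality rate -
    the ultimate measure of effectiveness described above."""
    return (baseline_rate - current_rate) / baseline_rate * 100

def efficiency(total_cost: float, cases_averted: int) -> float:
    """Cost per case averted: how well resources (money, men,
    material, time) achieve a given effectiveness."""
    return total_cost / cases_averted

# Suppose incidence fell from 12 to 3 per 1,000 after the programme
print(effectiveness(12.0, 3.0))    # 75.0 (% reduction)

# Suppose the programme cost 900,000 units and averted 4,500 cases
print(efficiency(900_000, 4_500))  # 200.0 (cost per case averted)
```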
16. • Relevance: Relevance or requisiteness relates to the appropriateness of the
service, whether it is needed at all
• Adequacy: It implies that sufficient attention has been paid to certain
previously determined courses of action.
• Accessibility: It is the proportion of the given population that can be
expected to use a specified facility, service, etc. The barriers to accessibility
may be physical (e.g., distance, travel time); economic (e.g., travel cost, fees
charged); or social and cultural (e.g., caste or language barriers).
• Acceptability: The service provided may be accessible, but not acceptable to
all, e.g., screening for rectal cancer.
• Impact: It is an expression of the overall effect of a programme, service or
institution on health status and socio-economic development. For example,
as a result of malaria control in India, not only did the incidence of malaria
drop, but all aspects of life - agricultural, industrial and social -
showed an improvement.
17. Some Possible Endpoints for Measuring Success of a Vaccine Program
1. Number (or proportion) of people immunized
2. Number (or proportion) of people at (high) risk who are immunized
3. Number (or proportion) of people immunized who show serologic response
4. Number (or proportion) of people immunized and later exposed in whom
clinical disease does not develop
5. Number (or proportion) of people immunized and later exposed in whom
clinical or subclinical disease does not develop
Example evaluating the effectiveness of a vaccine program:
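Endpoints 4 and 5 above compare disease occurrence in immunized and unimmunized people; the conventional summary measure is vaccine effectiveness, VE = (ARU - ARV)/ARU x 100, where ARU and ARV are the attack rates in the unvaccinated and vaccinated. A minimal sketch with invented counts:

```python
# Vaccine effectiveness from attack rates (all counts are invented).

def attack_rate(cases: int, population: int) -> float:
    """Proportion of a group developing disease after exposure."""
    return cases / population

def vaccine_effectiveness(aru: float, arv: float) -> float:
    """VE = (ARU - ARV) / ARU * 100, in percent."""
    return (aru - arv) / aru * 100

aru = attack_rate(80, 1_000)  # unvaccinated: 8%
arv = attack_rate(10, 1_000)  # vaccinated: 1%
print(round(vaccine_effectiveness(aru, arv), 1))  # 87.5
```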
18. 3. Planning the methodology: A format must be prepared for gathering the
desired information
4. Gathering information: the type and amount of information required will
depend on the purpose of the evaluation
Epidemiological evaluation:
Independent variable: health service
Dependent variable: reduction in adverse health effects
19. Various Study Designs
• Randomized design
• Non-randomized designs:
- Before-after design
- Simultaneous nonrandomized design
- Comparison of utilizers and non-utilizers
- Comparison of eligible and non-eligible
- Combination design
- Case-control studies
Evaluation using individual data:
20. Example:
Evaluation of multiphasic screening in South East London led to withholding
of the vast outlay of resources that would have been required to mount a
national programme.
Randomized design:
• Eliminates problem of
selection bias.
• For ethical and practical
reasons, randomizing patients
to receive no care is not
considered.
• Assign different types of care
and then evaluate.
21. Demerits of randomized designs:
• RCTs are logistically complex and extremely
expensive.
• Ethical problems
• Long time to completion, so relevance is questionable
• Alternative approach: outcomes research.
22. Non-randomized designs:
Before-After Design (historic controls):
• Data obtained in each of the two periods may not be comparable in terms of
quality and completeness.
• It is unclear whether a difference is due to the programme or to other factors
which changed over time, like housing, nutrition, and lifestyle.
• The problem of selection exists.
Simultaneous Nonrandomized Design (Program-No Program):
• A cohort study in which the type of health care being studied represents
the "exposure"
• The problem arises of how to select the exposed and non-exposed groups for study
23. Comparison of utilizers and non-utilizers:
• Compares a group of people who use a health service with a group who do
not.
• The problem of self-selection exists.
• This can be addressed by characterizing the prognostic profile of people in
both groups.
• One cannot instruct someone not to utilize the programme.
24. Comparison of eligible and non-eligible populations:
• The assumption is that eligibility or non-eligibility is related to neither
prognosis nor outcome,
• so no selection bias is introduced.
• Eligibility may be defined, e.g., by employer or census tract of residence,
• which may, however, relate to socioeconomic status.
25. Combination Designs:
• A combination of both designs, viz. the before-after and program-no
program designs.
• Used to compare the morbidity level in people who receive care and those
who do not.
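The logic of the combination design can be sketched as a difference of differences: the change in a comparison (no-program) area estimates what would have happened anyway, and the programme effect is the extra change in the programme area. All figures below are invented:

```python
# Combination design sketch: before-after change in the programme area,
# net of the before-after change in a no-program comparison area.
# All morbidity figures (per 1,000) are invented.

def programme_effect(prog_before: float, prog_after: float,
                     comp_before: float, comp_after: float) -> float:
    """Difference of differences: (programme change) - (comparison change)."""
    return (prog_after - prog_before) - (comp_after - comp_before)

# Programme area fell 20 -> 12; comparison area fell only 20 -> 17
print(programme_effect(20, 12, 20, 17))  # -5 (extra reduction per 1,000)
```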
Case-Control Studies:
• The case-control design has been applied primarily to etiologic studies; when
appropriate data are obtainable, it can serve as a useful, but limited,
surrogate for randomized trials.
• Because this design requires definition and specification of cases, it is most
applicable to studies of prevention of specific diseases. The "exposure" is then
the specific preventive or other health measure being assessed.
• In most health services research, stratification by disease severity and by other
possible prognostic factors is essential for appropriate interpretation of the
results.
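As a sketch of how the case-control design can substitute for a trial, the "exposure" below is a hypothetical preventive measure; an odds ratio under 1 suggests a protective effect. Counts are invented:

```python
# Case-control evaluation sketch: exposure = a preventive health measure.
# All counts are invented for illustration.

def odds_ratio(cases_exposed: int, cases_unexposed: int,
               controls_exposed: int, controls_unexposed: int) -> float:
    """Odds of exposure among cases divided by odds among controls."""
    return (cases_exposed / cases_unexposed) / (controls_exposed / controls_unexposed)

# 30 of 130 cases vs. 70 of 130 controls had received the measure
print(round(odds_ratio(30, 100, 70, 60), 2))  # 0.26 - consistent with protection
```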
26. Outcomes Research
• Denotes studies comparing the effects of two or more health care
interventions or modalities - such as treatments, forms of health care
organization, or type and extent of insurance coverage and provider
reimbursement - on health or economic outcomes.
• Uses data from large data sets derived from large populations.
Advantages:
• Refers to real-world populations, so the issue of representativeness or
generalizability is minimized
• As the data already exist, analysis can be completed and results generated
rapidly
• Sample size is not a problem, except when smaller sub-groups are examined
• Cost-effective.
Evaluation using group data:
27. Disadvantages:
• Data gathered for fiscal and administrative purposes may not suit the research
questions addressed in the study
• Questions framed with today's more complete knowledge could not have been
anticipated when the data were collected
• Data on the independent and dependent variables may be limited
• Data relating to possible confounders may be inadequate or absent
• Certain variables that are relevant today were not included in the original data set
• The investigator may create surrogate variables or may change the original
question he wanted to address
• The investigator becomes progressively more removed from the individuals being
studied
28. 5. Analysis of results: the analysis and interpretation of data should take place
within the shortest time feasible, so that the evaluation results can be discussed promptly
6. Taking action: for evaluation to be truly productive, actions designed to
support, strengthen or otherwise modify the services involved, need to be taken
7. Re-evaluate: evaluation is an ongoing process aimed mainly at rendering
health activities more relevant, more efficient and more effective
29. Two indices used in ecologic studies of health services:
Avoidable mortality
• Avoidable mortality analysis
assumes that the rate of
"avoidable death"* should vary
inversely with the availability,
accessibility, and quality of
medical care in different
geographic regions.
Health Indicators
• An indicator is a standardized,
objective measure that
allows—
• A comparison among health
facilities and among countries
• A comparison between
different time periods
• A measure of the progress
toward achieving program
goals.
*Avoidable deaths are defined as preventable, amenable, or both,
where each death is counted only once.
31. The five-year RCH Phase II was launched in 2005 with a vision to bring about
outcomes as envisioned in the Millennium Development Goals, the National
Population Policy 2000 (NPP 2000), the Tenth Plan, the National Health
Policy 2002 and Vision 2020 India; to minimize regional variations in the
areas of RCH and population stabilization through an integrated, focused,
participatory programme meeting the unmet needs of the target
population; and to provide assured, equitable, responsive quality services.
• Goal: “Health For All”
• Objective: Population stabilization by 2045
• Programme: Comprehensive R.C.H services
• Monitoring & Evaluation: RCH indicators/feedback data
Example:
32. Accessibility Indicators:
• No. of eligible couples registered/ANM
• No. of Antenatal Care sessions held as
planned
• % of sub-centres with no ANM
• % of sub-centres with working
ANC equipment
• % of ANMs/TBAs without requisite skills
• % of sub-centres with an infant weighing
machine
Quality Indicators:
• % Pregnancy Registered before 12
weeks
• % ANC with 5 visits
• % ANC receiving all RCH services
• % High risk cases referred
• % High risk cases followed up
• % deliveries by ANM/TBA
• % PNC with 3 postnatal visits
Impact Indicators:
• % Deaths from maternal causes
• Maternal mortality ratio
• Prevalence of maternal morbidity
• % Low birth weight
• Neonatal mortality rate
• Prevalence of post natal maternal
morbidity
• Couple protection rate
Example cont…:
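Several of the indicators above are simple proportions, while the maternal mortality ratio is conventionally expressed per 100,000 live births. A sketch with invented counts:

```python
# Computing two of the RCH indicators above from raw counts
# (all counts are invented for illustration).

def pct(numerator: int, denominator: int) -> float:
    """A coverage/quality indicator as a percentage."""
    return numerator * 100 / denominator

def maternal_mortality_ratio(maternal_deaths: int, live_births: int) -> float:
    """Maternal deaths per 100,000 live births."""
    return maternal_deaths * 100_000 / live_births

# % of pregnancies registered before 12 weeks
print(pct(420, 600))                        # 70.0

# Maternal mortality ratio
print(maternal_mortality_ratio(9, 10_000))  # 90.0
```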
34. • The CDC assembled an Evaluation
Working Group comprising experts in
the fields of public health and evaluation,
whose work resulted in a community toolbox.
• "Recommended Framework for Program
Evaluation in Public Health Practice," by
Bobby Milstein, Scott Wetterhall, and the
CDC Evaluation Working Group, 1997.
The steps of the framework, taken in any evaluation, are:
Step 1: Engage stakeholders.
Step 2: Describe the program.
Step 3: Focus the evaluation design.
Step 4: Gather credible evidence.
Step 5: Justify conclusions.
Step 6: Ensure use and share lessons learned.
The second element of the framework is a set of
30 standards for assessing the quality of
evaluation activities, organized into the
following four groups:
Standard 1: utility,
Standard 2: feasibility,
Standard 3: propriety, and
Standard 4: accuracy.
37. • The Guidelines have been updated to reflect
feedback from trainings and interviews, and
recent changes in UNDP, bringing them into line
with the new UNDP Evaluation Policy and the
United Nations Sustainable Development
Cooperation Framework (UNSDCF).
• The following documents are of particular
importance for the UNDP evaluation architecture:
• UNDP, 2019, Revised UNDP Evaluation Policy.
• UNDP, 2020, Social and Environmental
Standards.
• UNDP, 2018, Gender Equality Strategy 2018-
2021.
• The Sustainable Development Goals (SDGs).
38. CONCLUSION
• Monitoring and evaluating transitions in global health programs can
bring conceptual clarity to the transition process, provide a
mechanism for accountability, facilitate engagement with local
stakeholders, and inform the management of transition through
learning.
• Further investment and stronger methodological work are needed.
• Monitoring and evaluation of projects can be a powerful means to
measure their performance, track progress towards achieving desired
goals, and demonstrate that systems are in place that support
organisations in learning from experience and adaptive management.
39. REFERENCES
• Gordis L. Epidemiology. 4th ed. Philadelphia: Elsevier Saunders; 2009.
• Park K. Park's Textbook of Preventive and Social Medicine. 22nd ed.
• CDC. Framework for program evaluation in public health. MMWR 1999;48(RR-11).
• WHO; UNFPA. Programme Manager's Planning Monitoring & Evaluation Toolkit.
Division for Oversight Services; August 2004.
• UNICEF. "A UNICEF Guide for Monitoring and Evaluation: Making a
Difference?" Evaluation Office, New York; 1991.
• https://www.ruralhealthinfo.org/toolkits/health-promotion/4/types-of-
evaluation
• https://stacks.cdc.gov/view/cdc/5204
• https://mainweb-v.musc.edu/vawprevention/research/programeval.shtml
Editor's Notes
Effectiveness: It is the extent to which the underlying problem is prevented or alleviated. The ultimate measures of effectiveness will be the reduction in morbidity and mortality rates.
Efficiency: It is a measure of how well resources (money, men, material and time) are utilized to achieve a given effectiveness.
Phases and stages of evaluation
Formative evaluation: Formative evaluation occurs during program development and implementation. It provides information on achieving program goals or improving your program.
Process evaluation: Process evaluation is a type of formative evaluation that assesses the type, quantity, and quality of program activities or services.
Outcome evaluation: Outcome evaluation can focus on short- and long-term program objectives. Appropriate measures demonstrate changes in health conditions, quality of life, and behaviors.
Impact evaluation: Impact evaluation assesses a program's effect on participants. Appropriate measures include changes in awareness, knowledge, attitudes, behaviors, and/or skills.
7 steps
In health services research;
Independent variable- health service
Dependent variable- reduction in adverse health effects
if the modality of care is effective
In this situation, environmental and other factors that may influence the relationship are also taken into account.
Classic health services research into effectiveness, taking into
account the possible influence of environmental and other factors
There are various study designs used in evaluation of health services
study participants are assigned to receive one type of care versus another rather than to receive care versus no care
Where a cause of death falls within both the preventable and amenable definition, all deaths from that cause are counted in both categories when they are presented separately.
Characteristics of indicators:
Valid: should actually measure what they are supposed to measure.
Reliable: answer should be same if measured by different people in same conditions
Sensitive: sensitive to change in situation
Specific: reflect changes only in situation
Feasible : ability to obtain needed data
Relevant :contribute to understanding of phenomenon of interest
Access to Health Services
Persons with medical insurance (AHS-1.1)
Persons with a usual primary care provider (AHS-3)
Oral Health
Children, adolescents, and adults who visited the dentist in the past year (OH-7)
Tobacco
Adult cigarette smoking (TU-1.1)
Adolescent cigarette smoking in past 30 days (TU-2.2)
Reproductive and child health
ANM - auxiliary nurse midwife
ANC - antenatal checkup
TBA - traditional birth attendant
FP - family planning
PNC - postnatal checkup
ARI - acute respiratory infection
RTI - reproductive tract infection
The North Dakota Oral Health Program works with external evaluators to assess the progress of various program components. Evaluation activities include review of patient data, focus groups, key informant interviews, and survey research. North Dakota’s school-based sealant programs address issues around oral health inequity by improving access to oral health screenings, and both fluoride varnish and dental sealant applications in a school setting. Twenty-one percent of students served were Indigenous; this is notable given that only 5.6% of the total state population includes individuals who are Indigenous. Of those students served, a greater percentage of those who were Indigenous and those who were in kindergarten had not yet visited a dental provider. It is imperative to continue to support and grow participation in the school-based sealant program, and to identify opportunities to increase early dental visits among young children and children who are Indigenous.
This report presents a framework for understanding program evaluation and facilitating
integration of evaluation throughout the public health system. The purposes of
this report are to
· summarize the essential elements of program evaluation;
· provide a framework for conducting effective program evaluations;
· clarify the steps in program evaluation;
· review standards for effective program evaluation; and
· address misconceptions regarding the purposes and methods of program
evaluation.
Adhering to these six steps will facilitate an understanding of a program’s context
(e.g., the program’s history, setting, and organization) and will improve how most
evaluations are conceived and conducted.
CDC’s Guidelines for Evaluating Surveillance Systems are being updated to address the need for a) the integration of surveillance and health information systems, b) the establishment of data standards, c) the electronic exchange of health data, and d) changes in the objectives of public health surveillance to facilitate the response of public health to emerging health threats (e.g., new diseases).
Task A. Engage the Stakeholders in the Evaluation
Task B. Describe the Surveillance System to be Evaluated
B.1. Describe the Public Health Importance of the Health-Related Event Under Surveillance
B.2. Describe the Purpose and Operation of the Surveillance System
B.3. Describe the Resources Used to Operate the Surveillance System
Task C. Focus the Evaluation Design
Task D. Gather Credible Evidence Regarding the Performance of the Surveillance System
D.1. Indicate the Level of Usefulness
D.2. Describe Each System Attribute
D.2.a. Simplicity
D.2.b. Flexibility
D.2.c. Data Quality
D.2.d. Acceptability
D.2.e. Sensitivity
D.2.f. Predictive Value Positive
D.2.g. Representativeness
D.2.h. Timeliness
D.2.i. Stability
Task E. Justify and State Conclusions, and Make Recommendations
Task F. Ensure Use of Evaluation Findings and Share Lessons Learned