Evaluation Designs and Methods
Evaluation Designs:
Experimental
Quasi Experimental
Observational
1. Experimental Evaluation Design
This involves the random allocation of participants into intervention and control groups. The intervention group receives the health education or health promotion intervention while the control group does not. The difference in outcomes between the two groups is then compared to determine the effectiveness of the intervention.
Example: A group of leaders is identified as appropriate for a leadership development program. The group is randomly divided into two cohorts, with one group participating in the leadership development program first. Those participating in the program are compared with the group that has not yet participated in the program.
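The comparison above can be sketched in code. This is a minimal illustrative sketch, not from the source: all participant names and scores are hypothetical, and the "outcome" is a made-up knowledge score.

```python
# Sketch of an experimental evaluation design: random allocation
# of participants to intervention and control groups, followed by
# a comparison of mean outcomes between the two groups.
import random
import statistics

random.seed(42)  # reproducible allocation for the illustration

participants = [f"P{i}" for i in range(1, 21)]
random.shuffle(participants)          # random allocation
intervention = participants[:10]
control = participants[10:]

# Hypothetical post-program knowledge scores (0-100) for each group.
scores = {p: random.gauss(70, 5) for p in intervention}
scores.update({p: random.gauss(60, 5) for p in control})

mean_intervention = statistics.mean(scores[p] for p in intervention)
mean_control = statistics.mean(scores[p] for p in control)

# The difference in outcomes between the two groups estimates
# the effectiveness of the intervention.
effect = mean_intervention - mean_control
print(f"Intervention mean: {mean_intervention:.1f}")
print(f"Control mean:      {mean_control:.1f}")
print(f"Estimated effect:  {effect:.1f}")
```

In a real trial, the group difference would also be tested for statistical significance (e.g. with a two-sample t-test) rather than reported as a raw difference.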
2. Quasi-Experimental Design
This involves the comparison of outcomes before and after the intervention.
Participants are not randomly allocated into groups, but the intervention group is compared to a control group or to the same group before the intervention.
Example:
Leaders choose whether or not to participate in a leadership development program. Those participating in the program are compared to themselves before the program or to other groups who did not participate in the program.
A quasi-experimental design is a non-randomized study design used to evaluate the effect of an intervention.
Unlike a true experiment, in a quasi-experimental study the choice of who gets the intervention and who doesn’t is not randomized.
Instead, the intervention can be assigned to participants according to their choosing or that of the researcher, or by using any method other than randomness.
Having a control group is not required, but if present, it provides a higher level of evidence for the relationship between the intervention and the outcome.
Because participants are not assigned at random and the control group is optional, a quasi-experimental design may suffer from:
Confounding:
The initial characteristics of the participants may provide an alternative explanation for the outcome.
Bias:
Alternative explanations of the outcome, such as natural progression, outside events, differential selection of participants, etc.
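A common way to handle these threats in a quasi-experimental pre/post comparison is a difference-in-differences calculation, where the comparison group's change is subtracted out. The sketch below is illustrative only; all group scores are hypothetical.

```python
# Sketch of a quasi-experimental pre/post comparison with a
# non-randomized comparison group (difference-in-differences).

# Mean outcome scores before and after the program (hypothetical).
program_group = {"before": 58.0, "after": 74.0}     # self-selected participants
comparison_group = {"before": 57.0, "after": 62.0}  # did not participate

change_program = program_group["after"] - program_group["before"]
change_comparison = comparison_group["after"] - comparison_group["before"]

# Subtracting the comparison group's change helps separate the
# program's effect from natural progression and outside events
# (the bias threats noted above).
did_estimate = change_program - change_comparison
print(f"Program group change:    {change_program:+.1f}")
print(f"Comparison group change: {change_comparison:+.1f}")
print(f"Difference-in-differences estimate: {did_estimate:+.1f}")
```

Note that this only adjusts for changes common to both groups; it cannot remove confounding from pre-existing differences in who chose to participate.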
3. Observational Evaluation Design:
This involves the observation and recording of behaviours, attitudes, or outcomes without intervening. This design is useful for evaluating the impact of large-scale health education or health promotion interventions.
The investigator does not intervene; rather, he or she simply observes and assesses the strength of the relationship between exposure and outcome.
For example:
A health education program on handwashing was conducted in Gramthan Municipality. Observational evaluation includes observing whether the residents of Gramthan Municipality wash their hands after using the toilet or after working in dirt.
A health education program on a balanced diet was conducted for postpartum mothers. Observational evaluation includes checking whether the baby is of a healthy weight or not, just by observing.
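The handwashing example could be summarized as follows. This is an illustrative sketch only; the observation counts are hypothetical, and "exposure" here means having attended the education sessions.

```python
# Sketch of an observational evaluation: tally observed handwashing
# among residents exposed vs. not exposed to the education program,
# and compare the proportions. The investigator does not intervene.

# (washed hands, total observed) - hypothetical counts
exposed = (45, 60)      # attended the handwashing sessions
unexposed = (20, 50)    # did not attend

p_exposed = exposed[0] / exposed[1]
p_unexposed = unexposed[0] / unexposed[1]

# A ratio above 1 suggests an association between exposure and the
# observed behaviour (association only, not proof of causation).
prevalence_ratio = p_exposed / p_unexposed
print(f"Handwashing, exposed:   {p_exposed:.0%}")
print(f"Handwashing, unexposed: {p_unexposed:.0%}")
print(f"Prevalence ratio: {prevalence_ratio:.2f}")
```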
Experimental Study (a.k.a. Randomized Controlled Trial) vs. Quasi-Experimental Study

Objective:
Both evaluate the effect of an intervention or a treatment.

How do participants get assigned to groups?
Experimental: random assignment.
Quasi-experimental: non-random assignment (participants get assigned according to their choosing or that of the researcher).

Is there a control group?
Experimental: yes.
Quasi-experimental: not always (although, if present, a control group will provide better evidence for the study results).

Is there any room for confounding?
Experimental: no (although see Manson et al. for a detailed discussion of post-randomization confounding in randomized controlled trials).
Quasi-experimental: yes (however, statistical techniques can be used to study causal relationships in quasi-experiments).

Level of evidence:
Experimental: a randomized trial is at the highest level in the hierarchy of evidence.
Quasi-experimental: a quasi-experiment is one level below the experimental study in the hierarchy of evidence.

Advantages:
Experimental: minimizes bias and confounding.
Quasi-experimental: can be used in situations where an experiment is not ethically or practically feasible; can work with smaller sample sizes than randomized trials.

Limitations:
Experimental: high cost (as it generally requires a large sample size); ethical limitations; generalizability issues; sometimes practically infeasible.
Quasi-experimental: lower ranking in the hierarchy of evidence, as losing the power of randomization makes the study more susceptible to bias and confounding.
Methods of Evaluation
• Quantitative method of Evaluation
• Qualitative method of Evaluation
Quantitative Method of Evaluation
• Quantitative evaluation is outcome-oriented. You will need to
have predefined outcomes for your project. You will then test to see
how your program is doing with respect to these outcomes using
numerical data.
• Quantitative methods involve the use of numerical data to analyze and
interpret information. The information obtained produces data that can
be counted, categorized, measured, or ranked.
• This information can be evaluated using statistical analysis which
offers the opportunity to dig deeper into the data and look for the
meaning behind it.
• Typically, rating scales or closed questions are used to generate
quantitative data as these produce either numerical data or data that
can be put into categories (e.g. "yes" or "no" questions).
• After collection, this data can then be evaluated using statistical
analysis and easily placed into graphs and tables. The results from
quantitative methods are easy to summarize, compare, and generalize.
• Tools: questionnaire, checklist
• Quantitative methods can answer such questions as:
• How many people attended?
• How much did the training program cost?
• How many workshops did you complete?
• How many people completed or passed the program?
• What were the assessment outcomes?
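Questions like these reduce to counting and summarizing closed responses. The sketch below is illustrative only; the responses are hypothetical answers to a single "yes"/"no" questionnaire item.

```python
# Sketch of quantitative evaluation: closed ("yes"/"no")
# questionnaire responses summarized into counts and percentages,
# which can then be placed directly into graphs and tables.
from collections import Counter

responses = ["yes", "yes", "no", "yes", "no", "yes", "yes", "no"]

counts = Counter(responses)
total = len(responses)

for answer, n in counts.most_common():
    print(f"{answer}: {n}/{total} ({n / total:.0%})")
```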
Qualitative Methods
• Qualitative methods involve gathering information that is not in numerical
form.
• It is descriptive data of events, people, situations, and observed behaviors.
• It is typically opinions, beliefs, and attitudes of individuals who attended a
training program or those impacted by a program.
• The questions and methods used to gather qualitative data tend to be open
ended and less structured, thus harder to measure than quantitative data.
• However, the data is helpful as it can provide contextual information to
clarify potential issues by explaining the "why" and "how" behind the
issues.
Qualitative methods can answer such questions as:
What did participants get out of the program?
Why did participants feel the program was beneficial?
How will participants be able to use the information provided?
What are some challenges participants found with the program?
Qualitative Data Collection Methods
In-depth interviews
Observation methods
Document review
Focus group discussions