FUNDAMENTALS OF PROGRAM EVALUATION
LEARNING OBJECTIVES
• Evaluation fundamentals
• Developing a logic model
• Understanding evaluation design
• Data analysis approach
WHAT IS EVALUATION?
Evaluation is the systematic collection, analysis, and use of information to answer questions about projects, policies and programs, particularly about their effectiveness and efficiency.
EVALUATION FRAMEWORK
• Assessment of the need for the program (need assessment)
• Assessment of program design and logic/theory (selection of the program design and logic model)
• Assessment of how the program is being implemented (implementation plan)
• Assessment of the program's outcome or impact (impact measurement)
• Assessment of the program's cost and efficiency (cost and efficiency measurement)
Source - Rossi, Lipsey and Freeman (2004) model
KEY QUESTIONS TO CONSIDER
• What am I going to evaluate?
• What is the purpose of this evaluation?
• Who will use this evaluation? How will they use it?
• What questions is this evaluation seeking to answer?
• What information do I need to answer the questions?
• When is the evaluation needed? What resources do I need?
• How will I collect the data I need?
• How will data be analyzed?
• What is my implementation timeline?
PURPOSES OF EVALUATION
• Formative evaluation
• Summative evaluation
TYPES OF EVALUATION
• Process evaluation
• Outcome evaluation
• Economic evaluation
PROCESS EVALUATION
Key questions to be considered:
• How is the program being delivered?
• How appropriate are the processes compared with quality standards?
• Is the program being implemented correctly?
• Are participants being reached as intended? How satisfied are program clients?
• For which clients?
• What has been done in an innovative way?
OUTCOME EVALUATION (OR IMPACT EVALUATION)
Key questions to be considered:
• How well did the program work?
• Did the program produce the intended outcomes in the short, medium and long term?
• For whom, in what ways and in what circumstances?
• What unintended outcomes (positive and negative) were produced?
• To what extent can changes be attributed to the program?
• What were the particular features of the program and context that made a difference?
• What was the influence of other factors?
ECONOMIC EVALUATION
Key questions to be considered (cost-effectiveness analysis and cost-benefit analysis):
• What is the most cost-effective option?
• Has the intervention been cost-effective (compared to alternatives)?
• Is the program the best use of resources?
• What has been the ratio of costs to benefits?
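As a rough illustration of the last two questions, the sketch below works through a benefit-cost ratio and an incremental cost-effectiveness ratio. All figures are hypothetical and are not taken from the slides.

```python
# Illustrative cost-benefit and cost-effectiveness calculations.
# All figures are made up, chosen only to show the arithmetic.

program_cost = 250_000          # total program cost ($)
monetized_benefits = 400_000    # estimated dollar value of outcomes ($)

benefit_cost_ratio = monetized_benefits / program_cost
print(f"Benefit-cost ratio: {benefit_cost_ratio:.2f}")  # 1.60 -> benefits exceed costs

# Cost-effectiveness compares two options on cost per unit of outcome,
# e.g. cost per participant who achieves the target outcome.
cost_a, effect_a = 250_000, 500   # program A: cost, participants achieving the outcome
cost_b, effect_b = 150_000, 260   # program B (comparator)

# Incremental cost-effectiveness ratio: extra cost per extra unit of outcome
icer = (cost_a - cost_b) / (effect_a - effect_b)
print(f"ICER: ${icer:.0f} per additional successful participant")
```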
EVALUATION – LOGIC MODEL
A program theory or logic model explains how the activities of an intervention are understood to contribute to a chain of results (short-term outputs, medium-term outcomes) that produce the ultimate intended or actual impacts.
• It is a road map.
• It describes the underlying assumptions or hypotheses.
• It is a tool that identifies the links in a chain of reasoning about 'what causes what'.
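As an informal illustration of that chain, the sketch below lays out a logic model as a plain data structure. The component names follow the usual inputs, activities, outputs, outcomes and impact sequence; the example entries are hypothetical.

```python
# A minimal sketch of a program logic model as a plain data structure.
# The entries are invented purely for illustration.

logic_model = {
    "inputs":                ["funding", "staff", "partner organisations"],
    "activities":            ["deliver training workshops", "distribute resource kits"],
    "outputs":               ["number of workshops held", "number of participants reached"],
    "short_term_outcomes":   ["increased participant knowledge"],
    "medium_term_outcomes":  ["changed participant behaviour"],
    "impact":                ["improved population-level outcomes"],
}

# Reading the model top to bottom states the 'what causes what' chain:
for stage, items in logic_model.items():
    print(f"{stage}: {', '.join(items)}")
```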
COMPONENTS OF A LOGIC MODEL
Source - https://www.health.nsw.gov.au/research/Publications/developing-program-logic.pdf
EXAMPLE – LOGIC MODEL
Source - https://www.health.nsw.gov.au/research/Publications/developing-program-logic.pdf
METHODOLOGY, DESIGN, DATA ANALYSIS
PURPOSE OF COMBINING DATA
• Enriching: using qualitative work to identify issues or obtain
information on variables not obtained by quantitative surveys.
• Examining: generating hypotheses from qualitative work to be
tested through the quantitative approach.
• Explaining: using qualitative data to understand unanticipated
results from quantitative data.
• Triangulation (Confirming/reinforcing; Rejecting): verifying or rejecting results from quantitative data using qualitative data (or vice versa).
METHODOLOGY
• Quantitative
• Qualitative
DESIGNING THE EVALUATION METHODOLOGY
Evaluation design:
• Experimental
• Quasi-experimental
• Non-experimental
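As a rough illustration of the experimental design described in the editor's notes (random assignment to intervention and control groups), the sketch below simulates data and estimates the program effect as a difference in group means. The data and the effect size are invented for the example.

```python
# A minimal sketch of an experimental design: random assignment followed by
# a difference-in-means estimate of the program effect. Simulated data only.
import numpy as np

rng = np.random.default_rng(seed=0)

n = 200                                   # participants
assignment = rng.permutation(n) < n // 2  # random assignment: True = intervention group

# Simulated outcome: baseline noise plus a hypothetical program effect of 2.0
baseline = rng.normal(loc=10.0, scale=3.0, size=n)
outcome = baseline + np.where(assignment, 2.0, 0.0)

effect_estimate = outcome[assignment].mean() - outcome[~assignment].mean()
print(f"Estimated program effect: {effect_estimate:.2f}")
```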
DATA ANALYSIS
Quantitative studies result in data that are quantifiable, objective, and easy to interpret. The data can typically be summarized in a way that allows generalizations to be applied to the greater population, and the results can be reproduced.
DATA ANALYSIS APPROACH
• Descriptive
• Inferential
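The slides do not expand on inferential analysis; as one common example under that heading, the sketch below runs an independent-samples t-test comparing simulated intervention and control scores.

```python
# A minimal sketch of an inferential analysis: an independent-samples t-test.
# The groups and scores are simulated purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
group_a = rng.normal(loc=12.0, scale=3.0, size=80)  # e.g. intervention group scores
group_b = rng.normal(loc=10.0, scale=3.0, size=80)  # e.g. control group scores

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # a small p-value suggests a real difference in means
```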
COMMONLY USED DESCRIPTIVE STATISTICS:
• Frequencies – a count of the number of times a particular score or value is found in the data set
• Percentages – used to express a set of scores or values as a percentage of the whole
• Mean – the numerical average of the scores or values for a particular variable
• Median – the numerical midpoint of the scores or values, at the center of the distribution of the scores
• Mode – the most common score or value for a particular variable
• Minimum and maximum values (range) – the highest and lowest values or scores for any variable
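A minimal sketch computing the statistics listed above for a small, made-up set of survey scores:

```python
# Descriptive statistics for a small, hypothetical set of survey scores.
from collections import Counter
from statistics import mean, median, mode

scores = [3, 4, 4, 5, 2, 4, 5, 3, 4, 1]

frequencies = Counter(scores)                                   # count of each score
percentages = {s: 100 * c / len(scores) for s, c in frequencies.items()}

print("Frequencies:", dict(frequencies))
print("Percentages:", percentages)
print("Mean:", mean(scores))
print("Median:", median(scores))
print("Mode:", mode(scores))
print("Range:", (min(scores), max(scores)))
```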
REFERENCES:
• https://www.betterevaluation.org/en
• https://www.lga.sa.gov.au/page.aspx?u=2297
• https://www.health.nsw.gov.au/research/Publications/developing-program-logic.pdf
• https://cirt.gcu.edu/research/developmentresources/research_ready/quantresearch/analyze_data

Editor's Notes

  • #7 Formative evaluation refers to evaluation conducted to inform decisions about improvement. It can provide information on how the program might be developed (for new programs) or improved (for both new and existing programs). It is often done during program implementation to inform ongoing improvement, usually for an internal audience. Formative evaluations use process evaluation but can also include outcome evaluation, particularly to assess interim outcomes.
    Summative evaluation refers to evaluation conducted to inform decisions about continuing, terminating or expanding a program. It is often conducted after a program is completed (or well underway) to present an assessment to an external audience. Although summative evaluation generally reports once the program has been running long enough to produce results, it should be initiated during the program design phase. Summative evaluations often use outcome evaluation and economic evaluation, but could use process evaluation, especially where there are concerns or risks around program processes.
  • #8 Process evaluation investigates how the program is delivered, including efficiency, quality and customer satisfaction. It may consider alternative delivery procedures, and it can help to differentiate ineffective programs from failures of implementation. As an ongoing evaluative strategy, it can be used to continually improve programs by informing adjustments to delivery.
    Outcome evaluation (or impact evaluation) determines whether the program caused demonstrable effects on specifically defined target outcomes. It identifies for whom, in what ways and in what circumstances the outcomes were achieved; identifies unintended impacts (positive and negative); and examines the ways the program contributed to the outcomes and the influence of other factors.
    Economic evaluation addresses questions of efficiency by standardising outcomes in terms of their dollar value to answer questions of value for money, cost-effectiveness and cost-benefit. These types of analyses can also be used in formative stages to compare different options.
  • #18 Experimental design is considered the strongest methodology for demonstrating a causal relationship between pre-defined program activities and outcomes. It measures changes in the desired outcome for participants in an 'intervention' group and those in a 'control' group. Participants are randomly assigned, meaning there is negligible systematic difference between the groups; results are therefore independent of selection processes and any associated bias.
    Quasi-experimental designs are typically used when experimental designs are not feasible or ethical, but some form of control group is possible. High-quality quasi-experimental designs can show a causal link between pre-defined program activities and outcomes. These methods compare outcomes for program participants either against a non-random control group or at different phases in the rollout of a program, as in multiple baseline design.
    Non-experimental designs are also referred to as descriptive or observational studies. They do not use a control group, but instead examine changes in participants before and after program implementation, or rely only on qualitative data, such as client and stakeholder interviews or expert opinion.