Putting It All Together:
Producing and Implementing
an Evaluation Plan
Overview*
I. Introduction and Key Considerations
II. What Should Be Included in an Evaluation Plan
III. Developing Tangible Plans for Evaluating Your Programs
IV. Implementing the Plan
V. Reporting the Results
VI. Pitfalls to Avoid and Things to Remember
VII. Using the Evaluation Report to Improve Your Programs
*The following content is based on the W.K. Kellogg Foundation Evaluation Handbook (2004), Carter McNamara's Basic Guide to Program Evaluation, and The Program Manager's Guide to Evaluation, Second Edition, U.S. Department of Health and Human Services (2001).
I. Introduction and Key Considerations
Why Do We Need an Evaluation Plan?
Understanding the Functions of an Evaluation Plan
An evaluation plan will:
• Identify the purpose, users, resources and timelines of the evaluation
• Select the key evaluation questions and indicators and the best design to
measure intended results
• Sequence evaluation activities (such as completing baseline
documentation, conducting participant follow-ups, materials pre-tests,
project reviews and special studies)
• Prepare data collection and data analysis plans, including cost as well as
results or program data
• Plan for communication, dissemination and use of evaluation results
• Identify the technical competencies needed on the evaluation team(s).
Excerpted from http://www.prime2.org/sst/step9-1.html
Are You Ready?
You need to take a few more steps to pull everything together and develop a tangible plan for evaluating your programs:
• Have you completed the logic model?
• Have you determined what questions need to be answered?
• Have you decided on the types of information you need to collect and the data collection methods?
Key Considerations
When designing a program evaluation it is
important to consider the following questions:
• For what purpose is the evaluation being done, i.e., what do you
want to be able to decide as a result of the evaluation?
• What kinds of information are needed to make the decision you
need to make?
• From what sources should the information be collected?
• How can that information be collected in a reasonable fashion?
• When is the information needed (so, by when must it be
collected)?
• What resources are available to collect the information?
What is an Evaluation Plan?
An evaluation plan is a written document that
specifies the evaluation design and details the
procedures for conducting the evaluation.
II. What Does an Evaluation Plan Include?
Typically an evaluation plan includes:
• Executive Summary
• Program objectives and outcomes
• Evaluation questions
• Types of data/information that need to be collected
• Data collection methods
• Timeframe for the evaluation (when collection of evaluation
information will begin and end)
• Staff requirements (who will be involved in the evaluation process)
• Budgeting requirements
• Attachments (surveys, interviews and other relevant information)
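For teams that track planning documents programmatically, the components above can be captured in a simple checklist structure. The following Python sketch is purely illustrative; all field names are assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class EvaluationPlan:
    """Illustrative container for the typical components of an evaluation plan.
    Field names are hypothetical, not a prescribed schema."""
    executive_summary: str = ""
    objectives_and_outcomes: list[str] = field(default_factory=list)
    evaluation_questions: list[str] = field(default_factory=list)
    data_types: list[str] = field(default_factory=list)          # kinds of data/information to collect
    collection_methods: list[str] = field(default_factory=list)  # surveys, interviews, etc.
    timeframe: tuple[str, str] = ("", "")                        # (start, end) of data collection
    staff: list[str] = field(default_factory=list)               # who will be involved
    budget: float = 0.0
    attachments: list[str] = field(default_factory=list)         # instruments and other materials

    def missing_components(self) -> list[str]:
        """List components that are still empty, as a simple completeness check."""
        return [name for name, value in vars(self).items()
                if not value or value == ("", "")]
```

A draft plan could then be reviewed by calling `missing_components()` before circulating it to stakeholders.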
III. Developing Tangible Plans for Evaluating Your Programs
Steps in Developing an Evaluation Plan
1. Identifying Stakeholders and
Establishing an Evaluation Team
Why?
To ensure that you have gathered multiple
perspectives about the issues that require evaluation.
Who are the stakeholders?
Stakeholders include funders, project staff and administrators, project participants, community leaders, and others with a direct or indirect interest in program effectiveness.
Steps in Developing an Evaluation Plan
2. Budgeting for an Evaluation
Why?
To ensure that an organization has the resources, including time and money, to invest in an evaluation.
How much to spend on an evaluation?
An evaluation typically costs 5% to 10% of a project’s cost.
What expenses to include in the budget?
Evaluation staff salary and benefits; Consultant fees; Travel expenses for
staff and/or evaluators; Communications; Printing; Costs of acquiring data
collection instruments and library materials; Supplies and equipment
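As a quick worked example of the 5% to 10% rule of thumb above, the following Python sketch computes an evaluation budget range and itemizes it across the expense categories just listed. The category shares are invented for illustration, not recommendations.

```python
def evaluation_budget_range(project_cost: float) -> tuple[float, float]:
    """Apply the rule of thumb that an evaluation costs 5% to 10% of a project's cost."""
    return project_cost * 0.05, project_cost * 0.10

# Hypothetical allocation across the expense categories named above;
# the percentages are illustrative assumptions only.
ALLOCATION = {
    "Evaluation staff salary and benefits": 0.45,
    "Consultant fees": 0.20,
    "Travel for staff and/or evaluators": 0.10,
    "Communications and printing": 0.10,
    "Data collection instruments and library materials": 0.10,
    "Supplies and equipment": 0.05,
}

if __name__ == "__main__":
    low, high = evaluation_budget_range(200_000)  # example: a $200,000 project
    print(f"Evaluation budget range: ${low:,.0f} to ${high:,.0f}")
    for category, share in ALLOCATION.items():
        print(f"  {category}: ${high * share:,.0f} (at the 10% ceiling)")
```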
Steps in Developing an Evaluation Plan
3. Establishing Procedures for Managing
and Monitoring the Evaluation Process
Why?
To ensure consistency, confidentiality and accuracy in the process.
What kind of procedures?
- Appointing (hiring) and training the staff who will be responsible for the evaluation;
- Deciding where the collected data will be stored;
- Developing a data collection manual;
- Developing an evaluation timeline with key deadlines and responsible parties.
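The evaluation timeline described in the last procedure can be tracked in a few lines of code. This sketch flags overdue milestones; the dates, tasks, and responsible parties are hypothetical.

```python
from datetime import date

# Hypothetical milestones: (deadline, task, responsible party)
timeline = [
    (date(2024, 3, 1),  "Finalize data collection manual", "Evaluation coordinator"),
    (date(2024, 4, 15), "Complete baseline documentation",  "Program staff"),
    (date(2024, 9, 30), "Conduct participant follow-ups",   "External evaluator"),
    (date(2025, 1, 31), "Draft evaluation report",          "Evaluation team"),
]

def overdue(milestones, today=None):
    """Return milestones whose deadline has already passed."""
    today = today or date.today()
    return [(d, task, who) for d, task, who in milestones if d < today]

for deadline, task, who in overdue(timeline, today=date(2024, 5, 1)):
    print(f"OVERDUE since {deadline}: {task} ({who})")
```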
Create a Flexible and Responsive
Evaluation Design
Evaluations must be carefully designed
if they are to strengthen project
activities.
The evaluation design should avoid procedures that
require inhibiting controls.
Rather, the design should permit redirection and
revision as appropriate.
Create a Flexible and Responsive
Evaluation Design, contd.
Your evaluation design is flexible if it:
• “Fits” the needs of the target populations and other stakeholders
• Produces data relevant to specific questions and project needs
• Allows revising evaluation questions and plans as project conditions
change
• Is sensitive to cultural issues in the community
• Fits into the constraints of available resources and allows requesting
additional resources if necessary.
IV. Implementing the Plan:
Data Collection and Data Analysis
Conducting Data Collection and Analysis:
What is Involved?
• Gathering and organizing data in a systematic way to reduce
sources of bias and increase validity
• Discerning patterns, trends and comparisons from qualitative
and quantitative data
• Involving clients and stakeholders in interpreting data
• Employing standards to arrive at conclusions
Source: http://www.prime2.org/sst/step9-2.html
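To make "discerning patterns, trends and comparisons" concrete, here is a minimal sketch that compares pre- and post-program scores for a set of participants. The scores and variable names are invented for illustration.

```python
from statistics import mean

# Hypothetical pre/post scores for the same participants (e.g., a skills assessment)
pre_scores  = [52, 61, 48, 70, 55, 63]
post_scores = [58, 66, 57, 72, 64, 70]

# Per-participant change from baseline to follow-up
changes = [post - pre for pre, post in zip(pre_scores, post_scores)]

print(f"Mean pre-score:  {mean(pre_scores):.1f}")
print(f"Mean post-score: {mean(post_scores):.1f}")
print(f"Mean change:     {mean(changes):+.1f}")
print(f"Participants who improved: {sum(c > 0 for c in changes)} of {len(changes)}")
```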
Data Collection and Analysis
• Collect only the data you will use and that are relevant to your evaluation questions and purposes.
• Involve all staff who take part in the data-collection phase in up-front question formation.
• Revise data-collection strategies based on initial analysis. What is working? What is not working? What pieces of data are still missing? (See the completeness-check sketch after this list.)
• Base changes to existing tracking/data-collection strategies on what is learned from evaluation.
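A simple completeness check during initial analysis can surface which pieces of data are still missing, as described above. This sketch scans hypothetical participant records for empty fields; the record layout is an assumption for illustration.

```python
# Hypothetical participant records collected mid-evaluation
records = [
    {"id": 1, "baseline_survey": True,  "followup_survey": True,  "interview": False},
    {"id": 2, "baseline_survey": True,  "followup_survey": False, "interview": False},
    {"id": 3, "baseline_survey": False, "followup_survey": False, "interview": True},
]

# Data elements every record is expected to have before analysis closes
REQUIRED = ("baseline_survey", "followup_survey", "interview")

for record in records:
    missing = [f for f in REQUIRED if not record[f]]
    if missing:
        print(f"Participant {record['id']} is missing: {', '.join(missing)}")
```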
V. Reporting the Results
The Evaluation Report
The report should present findings so an audience can
clearly see:
• Changes in performance
• How these changes can be attributed to the interventions
• Cost of the interventions
If the evaluation design warrants, the report should also present the effects (if any) of alternative interventions, or of the absence of interventions in control areas, and discuss differences between those areas and the "case" area (a worked example of such a comparison follows below).
Source: http://www.prime2.org/sst/step9-3.html
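Where the design includes a control area, a simple difference-in-differences calculation is one way to show how changes in performance can be attributed to the intervention. The numbers below are invented for illustration, and the attribution holds only under the stated assumption.

```python
# Hypothetical mean outcome scores, before and after the program period
case_before, case_after       = 50.0, 62.0   # intervention ("case") area
control_before, control_after = 51.0, 54.0   # control area, no intervention

case_change    = case_after - case_before        # change in the case area
control_change = control_after - control_before  # background change

# The difference in changes is the portion plausibly attributable to the program,
# assuming the two areas would otherwise have changed similarly.
attributable = case_change - control_change

print(f"Change in case area:    {case_change:+.1f}")
print(f"Change in control area: {control_change:+.1f}")
print(f"Difference plausibly attributable to the intervention: {attributable:+.1f}")
```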
Communicating Your Evaluation Results
The Evaluation Report Includes:
• Purpose
• Methodology
• Findings
• Conclusions
• Recommendations
VI. Pitfalls to Avoid and Things to Remember
Pitfalls to Avoid
1. Assuming that the program is the only cause of positive changes documented. Several factors may be responsible for changes in participants or in a community.
2. Forgetting that the same evaluation method may give different
results when used by different people.
3. Choosing groups to compare that differ in too many ways. For example, gender, age, race, economic status, and many other factors can all have an impact on project outcomes.
4. Claiming that the results of a small-scale evaluation also apply to a
wide group or geographic area.
Pitfalls to Avoid, contd.
Source: Carter McNamara's Basic Guide to Program Evaluation
5. Balking at evaluation because it seems far too "scientific." Usually the first 20% of effort will generate the first 80% of the plan.
6. Failing to include some interviews in your evaluation methods.
Questionnaires don't capture the story and the story is usually the most
powerful depiction of the benefits of your services.
7. Denying you can learn a great deal about the program by
understanding its failures, dropouts, etc.
Pitfalls to Avoid, contd.
8. Throwing away evaluation results once a report
has been generated. Results can provide precious
information later when trying to understand
changes in the program.
Things to Remember
“Remember to ‘collect only the information you
are going to use, and use all the information you
collect.’”
W.K. Kellogg Foundation Evaluation Handbook (2004), p. 99
Things to Remember
“An evaluation report that sits on someone’s shelf will
not lead us to improved program design and
management. Effective program evaluation supports
action.”
W.K. Kellogg Foundation Evaluation Handbook (2004), p. 99
Things to Remember
“There is no "perfect" evaluation design. Don't worry
about the plan being perfect. It's far more important
to do something, than to wait until every last detail
has been tested.”
Carter McNamara’s Basic Guide to
Program Evaluation
VII. Using the Evaluation Report to Improve Your Programs
Using the Evaluation Results
The Evaluation Report should be used to:
• Identify strengths and weaknesses of your program or
provide strategies for continuous improvement
• Discover new knowledge about effective practice
• Improve communication and shared understanding
between different stakeholders involved in the program
• Strengthen the organization's position in the community
• Enhance the overall capacity of the organization.
