17. Evaluation is the activity of
systematically collecting, analysing
and reporting information so that it
can be used by appropriate people to
answer their questions and make
decisions about the future of
a project or program.
18. • Who carries out the ‘activity’?
• Who or what is an evaluator?
• How does an activity become ‘systematic’?
• What activities and skills do ‘collecting, analysing and reporting’
involve?
• How do you go about ‘collecting, analysing and reporting’?
• Is that an exhaustive list of ‘the activity’ that is called ‘evaluation’?
• What kind of ‘information’ is being referred to?
• What is the thing or things you collect information about?
• Who should be involved in an evaluation?
• Who are the ‘appropriate people’ who ‘use’ the ‘information’
supplied by an evaluation?
• And for what purposes can they ‘use’ this information?
• What kinds of questions can they ask?
20. Rainbow Framework
Over 200
methods/options
related to 35 tasks
in 7 clusters
32. Evaluation Competencies
1. Reflective Practice
2. Technical Practice
3. Situational Practice
4. Management Practice
5. Interpersonal Practice
Competencies for Canadian Evaluation Practice, Canadian Evaluation Society
33. Evaluation Standards
1. Utility Standards
2. Feasibility Standards
3. Propriety Standards
4. Accuracy Standards
5. Evaluation Accountability Standards
The Program Evaluation Standards developed by the JCSEE
34. How to navigate the maze of
evaluation methods and approaches
A BetterEvaluation Podcast
BetterEvaluation.org Simon Hearn & Ron Mackay
Editor's Notes
There is increasing recognition of the importance of evaluation. But when NGOs or government agencies start to plan an evaluation, or to engage an evaluator, it is difficult to find accessible, comprehensive and credible information about how to choose the right combination of evaluation methods, and how to implement them well – or how to manage these issues with a potential evaluator. Many guides to evaluation focus on only a few types of data collection methods and a few types of research designs. Few guides cover newer types of data collection (such as GIS logging on mobile phones, or tracking social media mentions), designs and strategies that can be used when experimental or quasi-experimental research designs are not feasible or appropriate, and methods to address issues such as clarifying and negotiating the values that should underpin the evaluation. There is information about some of these available on various web sites, but they can be difficult to locate, of varying quality, and not co-ordinated. And much of the information is partisan for particular methods, or does not include examples from practice.