3–6. Why do we need to start with a clear definition?
Source: Hobbies on a Budget / Flickr
Frame Decision  Make Decision
Frame Evaluation  Design Evaluation
7. Four evaluation tasks in FRAMING
Identify primary intended users
Decide purpose(s) (intended uses)
Specify key evaluation questions
Determine what ‘success’ looks like
11. Purposes (intended uses)
Formative – improve it
Summative – continue or stop it
Broader evidence base
Lobby and advocate
Image source: CK-CO138 - Charlotte Kesl / World Bank
15. Key evaluation questions – Descriptive:
How many children attend?
What learning tools are used?
Has learning improved?
Image source: ML030S09 - Curt Carnemark / World Bank
17. Key evaluation questions – Synthesis:
Has the program been a success?
Is it Value For Money compared to alternatives?
18. Key evaluation questions – Action:
How can the program be improved?
Should it continue?
19. Options for answering different types of questions
Clusters: MANAGE, DEFINE, FRAME, DESCRIBE, UNDERSTAND CAUSES, SYNTHESIZE, REPORT & SUPPORT USE
Descriptive questions – What were the activities, changes, context?
Causal questions – What caused or contributed to the identified changes?
Synthesis questions – Overall, was it good? Value for money?
Action questions – What should we do?
Today’s webinar looks at framing an evaluation – tasks that need to be done before developing a design for the evaluation
The last webinar, on Define, looked at what is being evaluated. This webinar looks at why.
The idea of framing an evaluation draws on research into decision making, which found that effective decision makers don’t rush to make a decision but stop to frame it first. They make sure they are clear about what needs to be taken into account, including who needs to be involved in making the decision and when it is needed.
With evaluation, there is often pressure to rush to design an evaluation – to choose metrics or a research design. But it is important to frame it first, to be clear about what needs to be taken into account when designing it – and in particular to be clear about the intended uses of the evaluation.
There are four tasks in framing that need to be addressed before you can design an evaluation. The first two are related – who is this evaluation for? And how are they going to use it? From this comes the third task – specifying key evaluation questions. And the fourth one is about values – determining what success looks like.
Let’s look at each of these in turn.
The first task is to identify the primary intended users. These are the specific people whose needs the evaluation is intended to meet. In some cases it is possible to identify them personally and involve them in the decision making about an evaluation. In other cases this is not possible.
In the Manage cluster of the framework we talk about ways of engaging stakeholders, including intended users, in the evaluation.
For example, in an evaluation of an educational program, the primary intended users might be the families of children attending the school, or the classroom teachers, or the school principal, or the School Council, or the education department, or other schools. Each of these potential users will have different information needs. It is important to be clear about whose needs the evaluation is intended to meet.
The second task is to decide the purpose or purposes of the evaluation – its specific intended uses.
An evaluation of an educational program might have purposes related to how the findings will be used. It might be formative – designed to improve how it is being implemented. It might be summative – designed to inform a decision about whether or not to keep funding it. Robert Stake has characterised these important differences in terms of a metaphor – when the cook tastes the soup, that’s formative; when the customer tastes the soup, that’s summative.
Evaluation findings might be intended to contribute to the broader evidence base.
Or the main purpose of an evaluation might be to advocate for a program or an organization – to justify expenditure and highlight achievements.
It might have other purposes that relate to the impact of the process of undertaking the evaluation. This might be a way of giving a voice to less powerful individuals and groups, such as families. It might be intended to improve trust between different stakeholders.
Or it might be intended to provide accountability – to improve performance by providing oversight and consequences.
These different purposes will lead to different evaluation questions being asked and different information being needed.
The first two tasks are interconnected and where you start will vary. Sometimes the primary intended users have already been identified, and the next task is to identify how they intend to use the evaluation. Other times the purpose of the evaluation has been identified and the next task is to identify the primary intended users. For example if an evaluation is being conducted to improve how a program is working, we should identify who would make decisions about changing the way things are done, and would therefore be the intended users.
The third task is to specify a small number of key evaluation questions. These are not interview questions or items on a questionnaire. They are the high-level questions that an evaluation aims to answer. It’s helpful to distinguish between different types of evaluation questions.
Descriptive questions ask about what has happened. What is the situation? What changes have occurred?
Causal questions ask what caused or contributed to the results.
So in an educational program, we might be interested in knowing if it was the program that caused or contributed to improved learning.
Synthesis questions combine the answers from descriptive and causal questions together with values to form an overall evaluative judgement. Was the program a success?
Did it produce good enough results? Was it value for money?
Action questions build on the answers to the previous questions and ask about what should be done.
These are often expressed as recommendations.
These four different types of questions are picked up in the rest of the Rainbow Framework.
The next three webinars in this series set out options for answering these different types of evaluation questions.
The final task in framing is to clarify the values that underpin the evaluation. In the framework we have referred to this as determining what success looks like.
We can think about success in terms of three desirable features of programs. The first is processes. For example, in an education program we might look at how the students are treated by the teachers and by each other.
The second aspect is outcomes. In an educational program we might be interested to know whether the students learn to read.
The third aspect of success is about the distribution of benefits and costs. For example, especially in equity-focused evaluation, we’re not so much interested in the average impact of a program. We’re more interested in finding out if the most disadvantaged groups benefited. In many programs, we want to know if there is gender equity in access and outcomes.
And finally, we should think about what success looks like in terms of both criteria and standards. The criteria might refer to good reading levels. But what standard will constitute “good” – better than before, better than average, or up to the national benchmark?
So that’s an overview of the Frame cluster of tasks in an evaluation.
Addressing these four tasks is part of developing the brief for an evaluation – what it needs to do.
On the site you’ll find information about each of these tasks, with links to useful resources. Some of the tasks only have resources, and others have different options.
For example, there are many different options for determining what success looks like. These include options based on formal statements of values, options for articulating tacit (undocumented) values, and ways of negotiating between competing values. There are also some approaches to evaluation (packages of options) that have a particular focus on this issue.
For each option you’ll find a brief description, and links to useful resources.
In this presentation I’ve used a simple hypothetical example. If you’d like to see a more detailed description of a real evaluation, check out the case studies of evaluations on the site. For example the case of BioNET - the global network for taxonomy.
Here is the link for the FRAME page.
You’ll find a two-page summary there as well that you can download.