Evaluation research is an important tool for assessing the merit of public and charitable services that everyone can use, and for identifying ways in which those services could be improved.
Dr Helen Kara, an evaluation research specialist, presents the key elements of good practice at each stage of the evaluation process, helping you to better understand your research.
To learn more about evaluation, download Helen's eBook: Beginners’ Guide to Evaluation - http://bit.ly/1Kr0vsG
3. Teaches evaluation research: UK central and local government departments, charities, universities.
Writes on evaluation research: academic journal articles, university working papers.
Does evaluation research: since 1999, for national, regional, and local organisations and partnerships.
4. Why is evaluation different from other research?
1. Evaluation research is designed to assess the value of a service or an intervention.
2. Evaluation research is not usually subject to ethical review.
3. Evaluation research makes recommendations for improvement based on its findings.
8. Evaluation and Ethical Review
Much research with human participants is subject to ethical review.
Evaluation research is rarely subject to ethical review.
Evaluation researchers must nevertheless act ethically at all stages of their work.
9. Evaluation Ethics: Five Key Principles
Principle 1: Evaluation research should be systematic and based on data.
Evaluation results should be accurate, understandable, and believable.
10. Evaluation Ethics: Five Key Principles
Principle 2: Evaluation research must be done competently.
Evaluators need relevant knowledge and skills. An experienced evaluator may act as a mentor.
11. Evaluation Ethics: Five Key Principles
Principle 3: Evaluators should act with honesty and integrity.
Different people have different priorities; evaluators should be as independent as possible.
12. Evaluation Ethics: Five Key Principles
Principle 4: Evaluators should respect the autonomy and dignity of others.
This applies to everyone, regardless of their age, gender, ethnicity, sexual orientation, life choices, etc.
13. Evaluation Ethics: Five Key Principles
Principle 5: Evaluation research should work for social justice.
Evaluation recommendations should work towards a fairer distribution of privilege and opportunity.
14. Are you an insider evaluator or an outsider evaluator?
Insider evaluator: for example, someone who works in, or uses, the service; or someone who receives, or has received, the intervention.
Outsider evaluator: for example, someone who does not work in or use the service; or someone who has not received the intervention.
15. Insider Evaluator
More knowledge of the service or intervention.
Can be easier to gain access to information.
Can be harder to maintain independence.
May experience role confusion.
16. Outsider Evaluator
Less knowledge of the service or intervention.
Can be harder to gain access to information.
Should be easier to maintain independence.
No role confusion.
17. How the decision is made
Ideal world: decision made solely with reference to the needs of the evaluation research.
Real world: decision made on the basis of factors such as budget and availability.
18. What is the context for your evaluation?
Where is your evaluation located in place and time?
What human and financial resources do you have for your evaluation?
Who is funding and supporting the evaluation?
Is anyone opposing the evaluation?
What political influences are likely to affect your evaluation?
19. Involving others in your evaluation
Three levels of involvement:
1. No involvement - evaluator does all the work.
2. Participation - other people join in and do some of the work.
3. Collaboration - other people work in partnership with the evaluator throughout the process.
20. Involving others in your evaluation
Pros and cons:
Cons - involvement is resource-intensive: the more involvement, the more time and money you will need.
Pros - involvement leads to better quality findings and recommendations, and can be highly ethical.
22. Defining specific evaluation questions
Think about overarching questions (what works well, what could be improved) and your evaluation context.
Example questions:
How often, and for how long, is the intervention received?
How satisfied are users with the service?
Does the intervention have any unexpected effects?
Is there anything that prevents people from using the service?
23. Collecting secondary data
Data originally collected or created for another purpose.
Examples:
Project documents
Meeting minutes
Monitoring data – service take-up, attendance levels, etc.
24. Collecting primary data
Data collected or created for the evaluation.
Examples:
Questionnaire surveys
Interview/focus group notes or transcripts
Photographs
Drawings, paintings, collages etc.
25. Analysing quantitative data
Inferential statistics rarely used, as samples are not usually random.
Use descriptive statistics:
Average
Range
Percentage
What do they tell you?
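As a minimal sketch (the satisfaction scores below are invented purely for illustration), descriptive statistics like these can be calculated with Python's standard library alone:

```python
# Minimal sketch: descriptive statistics for hypothetical questionnaire scores.
from statistics import mean

# Invented satisfaction ratings (1 = very dissatisfied, 5 = very satisfied).
scores = [4, 5, 3, 4, 2, 5, 4, 4, 3, 5]

average = mean(scores)                        # average (mean) rating
lowest, highest = min(scores), max(scores)    # range of ratings
satisfied = sum(1 for s in scores if s >= 4)  # respondents rating 4 or 5
percentage_satisfied = 100 * satisfied / len(scores)

print(f"Average rating: {average:.1f}")
print(f"Range: {lowest}-{highest}")
print(f"Satisfied users: {percentage_satisfied:.0f}%")
```

Figures like these can go straight into the findings, alongside a note of how many people responded.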
27. Synthesising data
Look at the findings from each dataset.
What is only in one dataset?
What is in more than one dataset?
What does that tell you?
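As one possible sketch (the theme labels here are invented), simple set operations are a quick way to see which findings appear in only one dataset and which are corroborated by more than one:

```python
# Minimal sketch: comparing themes coded from two hypothetical datasets.
survey_themes = {"friendly staff", "long waiting times", "unclear signposting"}
interview_themes = {"friendly staff", "unclear signposting", "transport difficulties"}

corroborated = survey_themes & interview_themes    # found in both datasets
survey_only = survey_themes - interview_themes     # found only in the survey
interview_only = interview_themes - survey_themes  # found only in the interviews

print("In more than one dataset:", corroborated)
print("Survey only:", survey_only)
print("Interviews only:", interview_only)
```

Findings that appear in more than one dataset carry more weight; findings that appear in only one may still matter, but need more careful interpretation.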
31. Disseminate findings
Share locally and more widely.
Consider using different methods, e.g.:
Internet (email, website, social media)
Face-to-face presentation
Mainstream media (local papers, radio)
32. What’s the point?
Evaluation findings can be predictable
BUT they are arrived at systematically and rigorously
AND based on firm evidence, not hearsay or conjecture
SO they are far more likely to influence funders.
33. Evaluation cycle
Evaluation research is not isolated.
It is applied research, designed to create improvements to services and interventions, and so to society.