This document summarizes an academic presentation on evaluating mental health and greenspace projects. It discusses the speaker's research center's focus on physical activity, diet, and health. It outlines reasons for doing evaluation, including improving services and demonstrating impact. It also describes different types of evaluation like process, outcome, and economic evaluations. Several example evaluation projects are mentioned, including walking groups through GP practices, the health effects of community gardening, and indoor versus outdoor exercise referral schemes. Good evaluation resources from organizations are also listed.
1. The role of evaluation in
mental health and
greenspace
Dr Ruth Jepson
Co-Director, Centre for Population and Public
Health Research, University of Stirling
Lead for Physical Activity, Diet and Health
Research Programme
2. Outline of talk
The work at the Centre for Public Health and Population Health Research
Why do evaluation?
Points to consider in evaluation
Some types of evaluation techniques and three examples
Good sources of support/toolkits/training available to projects wishing
to evaluate what they do
3. Centre for Public Health &
Population Health Research
Programme on physical activity and diet
Focus on:
•Promoting physical activity and a healthy diet as part of everyday
behaviour
•Promoting physical activity through health professional
referrals
•Understanding the barriers to physical activity and healthy
eating in different population groups (including ethnic minority
groups)
•Encouraging people to use the outdoors to increase their
feelings of health and wellbeing (including walking and
gardening)
Three relevant evaluation projects:
•Walking groups via a GP practice
•Health effects of community gardening
•Indoor versus outdoor activities via Exercise Referral Schemes
4. Why do evaluation?
Evaluation can be powerful and exciting! (Hmm..)
It can help you:
•Improve services
•Understand what works and what doesn’t
•Demonstrate the difference that a project makes
•Make decisions about the best use of funding
•Have evidence for policy and decision-making
(Evaluation Support Scotland)
Can be your most important and influential tool for getting new
or sustained funding - funders want evidence of what works
5. Who is evaluation for?
Evaluation stakeholders: what sort of evaluation is valued?
•Policy makers: effectiveness; what works?
•Funders: accountability/value for money
•Planning & Performance: performance monitoring/targets
•Managers: process evaluations
•Researchers: knowledge building; research quality and utility
•Service users: service quality (access, experience, relevance to needs)
6. What makes a ‘good’ evaluation?
[Word-cloud slide; legible labels: influences decision making; contributes to the evidence base; is objective and well executed; shows the value of what we’re doing]
7. Before you start an evaluation
• Clarify the aims of your project
• Think about who you are targeting
• Think about how your project will effect change
• Identify what you want to achieve in the short, intermediate and long-
term
• Decide how you will measure what you want to achieve [if possible
use well-validated measures – don’t attempt to make up your own]
• Think about what information you will need to collect [NEVER collect
information you don’t intend to use!]
8. Types of evaluation
There are 4 broad types of evaluation:
•process (which deals broadly with the processes
involved in service delivery)
•outcome (which determines whether aims have
been met & how effective the service is)
Often carried out at the same time or slightly
staggered
•impact (determining the wider implications of
the service, often by comparing with an area where
no service is provided)
•economic (determining whether the service is
cost effective)
9. Process evaluation
Takes place during setting up and/or delivery of the project
Provides guidance to those who are responsible for ensuring
and improving the project’s quality
Focuses on identifying barriers and facilitators to successful
implementation/delivery, as well as assessing whether the key
objectives have been met
Can be used to refine/modify the delivery of the project
Can be used to determine the effects of the project (intended and unintended)
10. Good process evaluation:
1. Is carried out by people not involved in the project (makes it more
objective and people may answer more honestly)
2. Doesn’t make assumptions about how the project works and what it
achieves – sometimes there are unintended consequences
3. Uses different types of data to assess the processes
Qualitative (talking to people) data can help identify if the project is running as
intended; meeting the needs of the participants (staff and users); the experiences
(positive and negative) of participants; the changes experienced (intended and
unintended); does the project work as it should?
Quantitative (numbers) data should be collected to demonstrate that you are
attracting the ‘right’ participants (called ‘reach’). Minimum data should include all the
variables you think are relevant. For example: age, gender, postcode, possibly health
condition and physical activity level for all participants, and performance data (e.g. how
many referrals were made, activities carried out, etc.)
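As a sketch of what that minimum ‘reach’ dataset might look like in practice, the snippet below tallies the variables the slide suggests. The field names and records are invented for illustration; only the Python standard library is used.

```python
from collections import Counter

# Hypothetical minimal dataset: one record per participant, covering the
# variables the slide suggests (age, gender, postcode, referral route).
participants = [
    {"age": 67, "gender": "F", "postcode": "EH21", "referred_by_gp": True},
    {"age": 54, "gender": "M", "postcode": "EH21", "referred_by_gp": False},
    {"age": 71, "gender": "F", "postcode": "EH32", "referred_by_gp": True},
]

def reach_summary(records):
    """Summarise who the project is attracting ('reach')."""
    ages = [r["age"] for r in records]
    return {
        "n": len(records),
        "mean_age": sum(ages) / len(ages),
        "gender": dict(Counter(r["gender"] for r in records)),
        "gp_referrals": sum(r["referred_by_gp"] for r in records),
    }

print(reach_summary(participants))
```

A summary like this is enough to show funders whether the intended target group is actually being reached, without collecting anything the project does not intend to use.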
11. Outcome Evaluation
Aims to answer questions such as:
Does the project work?
Does it provide the benefits to participants you wanted it to provide?
How do you measure success? [Where possible it is VERY IMPORTANT to use
questionnaires etc. that have been validated and used in the same type
of setting/population]
If possible, try to measure change when people start in the
project, after a few months and after about one year.
Gold standard would be the randomised controlled trial, where you have a
control group – you can be more certain that any benefits that are seen
are due to the project and not due to other reasons.
12. To do a good outcome evaluation
• Be realistic and clear about what your project is likely to achieve – make sure you are
measuring the right outcomes (e.g. improving mental health, not curing depression)
–ask why you think your project would impact on these outcomes
• Collect core information on everyone who comes to the project (baseline)
• Consistently ask same questions to all participants
• Use proper validated tools to measure outcomes
• Don’t collect unnecessary data (just because you think it ‘might be interesting’)
• Follow up participants at medium and longer term (e.g. 6 months & 12 months) to
see if change/benefit has occurred
• Demonstrate effectiveness by using experimental methods (not necessary for all
projects) – aspirational but not impossible!
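As a rough illustration of the baseline/follow-up comparison described above, the sketch below computes each participant's change score and a paired t statistic using only the Python standard library. The scores are invented; a real evaluation would use a validated instrument and a proper statistics package.

```python
import math
from statistics import mean, stdev

# Hypothetical wellbeing scores (higher = better) for the same six
# participants at baseline and at 6-month follow-up.
baseline  = [41, 38, 45, 40, 36, 44]
follow_up = [46, 41, 47, 45, 35, 50]

def paired_t(before, after):
    """Mean change score (after - before) and the paired t statistic."""
    diffs = [a - b for b, a in zip(before, after)]
    n = len(diffs)
    se = stdev(diffs) / math.sqrt(n)   # standard error of the mean change
    return mean(diffs), mean(diffs) / se

change, t = paired_t(baseline, follow_up)
print(f"mean change = {change:.2f}, t = {t:.2f}")
```

Asking the same validated questions at every time point is what makes a comparison like this possible at all; mixing instruments between baseline and follow-up makes the change score meaningless.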
13. Musselburgh Health Walks for sedentary people
and/or people with mental health problems
An evaluation by Roma Robertson, PhD student
Process evaluation questions:
1. Is it possible to run a programme of health walks 3 times/week?
2. Do health practitioners (HPs) refer onto the service?
3. Do sedentary patients/people with depression utilise the service?
4. Can the model be improved? What positive and …
14. Process evaluation
1. Is it possible to run a programme of health walks 3 times/week?
Yes: The programme of walks ran well for the planned 24 weeks. It was
organised by CHANGES Community Health Project and supported by 8
volunteer walk leaders, many of whom had attended the well
established wellbeing walks run by CHANGES over the years. However,
without the volunteer walk leaders it would have been difficult to
provide such frequent walks and at low cost.
2. Do health practitioners refer to the service?
The initial plan was to recruit walk participants
via GP consultations.
Low recruitment numbers led to alternative
recruitment strategies being introduced
(so No!)
15. Process evaluation
3. Do sedentary patients/people utilise the service?
Of the 19 who participated, 13 (68%) stayed until the end of the
12 weeks and 9 people continued to walk with the group
beyond the 12 weeks they had agreed to. This suggests that a
large proportion of participants valued the service.
4. Can the walks be improved?
There was a lot of very positive feedback.
When pressed for how the walks could be improved, walkers
indicated they would like a greater variety of walks; and a
quarter felt the walks were too short. One person found it
embarrassing to meet at the health centre and one was
worried in case someone they knew would see them on the
walks. Seven people liked going for coffee afterwards; 5
disagreed or were undecided
16. Outcomes evaluation
Is there any evidence of benefits to participants?
Collected data at baseline, 6 weeks and 6 months
Used validated measures of health outcomes (IMPORTANT!)
Collected data on mental wellbeing, physical activity, social networks, general health
Physical activity
Immediately after the intervention and 6 months later, more people reported that they
took at least 20 minutes of exercise on 3 days each week than before the study
Mental wellbeing
Scores for mental health improved after 12 weeks of walks, but the
improvement had reduced and was no longer statistically significant
6 months later
17. Effects of community gardening on
health outcomes
Project by Di Blackmore, PhD student
Aim: to investigate the effect of community gardens on health and
related outcomes. The objectives of this research are to:
•explore a range of health effects for the individuals
•explore mechanisms by which the community gardening project
affects health
•determine how/if the outcomes vary between the different community
gardens and other variables such as the amount of time spent in the
garden
18. Methods
Intend to recruit participants near the start of their gardening
experience, and take baseline measures of stress level and physical
health: blood pressure, body mass index, activity level and salivary
cortisol.
In addition, participants will be asked to complete validated
questionnaires that examine aspects of mental wellbeing, physical
activity, quality adjusted life years, loneliness, community cohesion
and social capital.
This data will be collected at baseline, some measures again at 6
weeks and all measures at 12 weeks.
Also collecting qualitative data to explore participants’ experiences
of being involved in the projects, and how they felt they benefited
from them.
19. A feasibility study of an Exercise Referral Scheme:
indoor versus outdoor activities
Led by Dr Larry Doi
Aim
To test the feasibility, acceptability and effectiveness of randomising patients to ERS
in either indoor or outdoor activities. YES a randomised controlled trial!!
Research questions
1. What are the initial estimations of effectiveness of indoor versus outdoor activities
on a range of health outcomes? [OE]
2. Are there particular aspects of the outdoor exercise or indoor exercise,
delivered via an exercise referral scheme, that confer specific health benefits? [OE]
3. Do the patients have strong preferences for the setting of physical activity? [PE]
4. What are the main barriers and facilitators to implementing the outdoor
intervention successfully? [PE]
5. What are the underlying mechanisms of action or change (why and how the
outdoor and indoor activities have an effect on health outcomes)? [PE]
20. A feasibility study of Exercise Referral Scheme:
indoor versus outdoor activities
Setting: Bathgate, West Lothian
Interventions
1) Indoor ERS (normal activities in leisure centre)
2) Outdoor ERS
Intervention to be developed, but it will roughly equate in duration and exercise intensity to
the indoor intervention. Components may include:
1.Green gym
2.Led walks
3.Other outdoor activities
Duration of the intervention period would be 12 weeks. Participants will be asked to only do
activities in the arm of the trial to which they have been allocated. After the 12 weeks all
participants will be able to continue with the physical activities of their choice.
All the outdoor activities (e.g. green gym, led walks) will remain in place for a minimum of
48 weeks (making the duration of both interventions a year in total). All participants will be
followed up at one year.
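The allocation step of a trial like this can be sketched as simple block randomisation, which keeps the indoor and outdoor arms balanced as participants are recruited. This is illustrative only: the function name and arm labels are invented, and real trials use concealed, pre-generated allocation sequences.

```python
import random

def block_randomise(participant_ids, arms=("indoor_ERS", "outdoor_ERS"), seed=None):
    """Allocate participants to trial arms in balanced blocks.

    Each block contains every arm an equal number of times, so the
    arms can never differ in size by more than part of one block.
    """
    rng = random.Random(seed)
    allocation = {}
    block = []
    for pid in participant_ids:
        if not block:                  # refill and shuffle a new block
            block = list(arms) * 2     # block size = 2 * number of arms
            rng.shuffle(block)
        allocation[pid] = block.pop()
    return allocation

alloc = block_randomise(range(12), seed=42)
print(alloc)
```

With 12 participants and a block size of 4, this always yields 6 per arm, which is why even small feasibility trials often prefer blocked over simple randomisation.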
21. Data collected routinely by the exercise referral
team
Weight loss: BMI / abdominal girth
Physical activity: General Practice Physical Activity Questionnaire
Physical fitness: peak flow
Blood pressure: systolic BP / diastolic BP
Mental health: Hospital Anxiety and Depression Questionnaire
General health: The General Health Questionnaire (GHQ12)
Patient satisfaction: satisfaction survey
Adherence: electronic records on attendance at feedback sessions
This data is recorded electronically and is collected at several time points.
22. Good Evaluation Resources
Evaluation Support Scotland
http://www.evaluationsupportscotland.org.uk/
BHF Exercise Referral Toolkit
http://www.bhfactive.org.uk/sites/Exercise-Referral-Toolkit/
Standard Evaluation Framework for physical activity interventions
http://www.noo.org.uk/uploads/doc/vid_16722_SEF_PA.pdf
Learning, Evaluation and Planning (LEAP)
http://www.scdc.org.uk/what/LEAP/
Stakeholder analysis – evaluation framework (Wimbush & Watson):
•Policy-makers (SE): effectiveness; outcome-oriented evaluations; what works?
•Strategic planners (DCE): performance management and monitoring
•Programme/project managers: objectives-based evaluations; developmental/formative evaluations
•R&E specialists: knowledge-building; research quality; research utility
•Service users: quality of service provision; experience/how treated; relevance to needs
But this is usually ‘messy’ – not everyone with an interest in a programme will necessarily have the same purpose or questions. Stakeholder management is a key skill for evaluators.
Questions and discussion: What do you think makes a good evaluation? (data producer/user perspective)
If taking account of and managing different stakeholders’ views/interests and questions in an evaluation, you can’t address all questions at once, SO YOU NEED TO PRIORITISE. This is where the above principles (pragmatic) come into play.