BENCHMARKING FOUNDATION EVALUATION PRACTICES
FOR MORE INFORMATION, CONTACT
Ellie Buteau, Ph.D.
Vice President – Research
617-492-0800 ext. 213
[email protected]
Julia Coffman
Director, Center for Evaluation Innovation
202-728-0727 ext. 116
[email protected]
AUTHORS

This report was prepared by the Center for Effective Philanthropy (CEP) together with the Center for Evaluation Innovation (CEI). A number of people contributed to the development of the survey instrument, analysis of data, and creation of this research report. From CEP: Ellie Buteau, Jennifer Glickman, and Charis Loh. From CEI: Julia Coffman and Tanya Beer.
This work is licensed under the Creative Commons BY-NC-ND License.
© 2016. The Center for Effective Philanthropy, Inc. All rights reserved.
TABLE OF CONTENTS

Respondent Demographics
Foundation Demographics
Evaluation Practices
Using Evaluation Information
Looking Forward
Discussion Questions
Methodology
ACKNOWLEDGEMENTS

We are very appreciative of the support that made this work possible. We are grateful to Marc Holley, Kelly Hunt, Ed Pauly, Diana Scearce, Nadya Shmavonian, and Fay Twersky for providing feedback on a draft of the survey used for this research. The survey created for this research study drew, in part, from a survey originally created by Patti Patrizi and Elizabeth Heid Thompson in 2009. We are also grateful to Johanna Morariu for providing feedback on a draft of this report.

The authors would like to thank CEP’s President, Phil Buchanan, for his contributions to this research, as well as CEP’s Art Director, Sara Dubois, for her design of the report.
This research is based on CEP and CEI’s independent data analyses, and CEP and CEI are solely responsible for its content. The report does not necessarily reflect the individual views of the funders, advisers, or others listed throughout this report.

For more information on CEP, please visit www.effectivephilanthropy.org. For more information on CEI, please visit www.evaluationinnovation.org and www.evaluationroundtable.org.
ABOUT THE CENTER FOR EFFECTIVE PHILANTHROPY

MISSION: To provide data and create insight so philanthropic funders can better define, assess, and improve their effectiveness—and, as a result, their intended impact.

ABOUT THE CENTER FOR EVALUATION INNOVATION

MISSION: Our aim is to push philanthropic and nonprofit evaluation practice in new directions and into new arenas. We specialize in areas that are challenging to assess, such as advocacy and systems change.
Dear Colleague,

In 2015 and 2016, the Center for Effective Philanthropy and the Center for Evaluation Innovation partnered for the first time to benchmark current evaluation practices at foundations. We wanted to understand evaluation functions and staff roles at foundations, the relationship between evaluation and foundation strategy, the level of investment in and support of evaluation work, the specific evaluation activities foundations engage in, and the usefulness and use of evaluation information once it is collected.
To explore these topics, we collected survey data from 127 individuals who were the most senior evaluation or program staff at their foundations (see Methodology). These individuals came from independent and community foundations giving at least $10 million annually, or foundations that were members of the Evaluation Roundtable—a network of foundation evaluation leaders who seek to support and improve evaluation practice in philanthropy.

The result of this effort is what we believe to be the most comprehensive review ever undertaken of evaluation practices at foundations.
It is our hope that the data presented in this report will help you and your foundation determine what evaluation systems and practices align best with your foundation’s strategy, culture, and ultimate mission. What resources should you invest in evaluation? On what should your evaluation efforts focus? How can you learn from and use evaluation information? We believe that considering these questions in light of this benchmarking data will help you answer them more thoughtfully. Ultimately, we hope the information in this report helps you prepare your foundation to better assess its progress toward its goals and its overall performance.
We hope you find this data useful.
Sincerely,
Ellie & Julia
September 2016
Julia Coffman
Director
Center for Evaluation Innovation
Ellie Buteau, Ph.D.
Vice President – Research
Center for Effective Philanthropy
DEFINITION OF EVALUATION USED IN THIS STUDY

In the survey, we defined evaluation and/or evaluation-related activities as activities undertaken to systematically assess and learn about the foundation’s work, above and beyond final grant or finance reporting, monitoring, and standard due diligence practices.
Respondent Demographics
ROLE AND TENURE

Thirty-eight percent of respondents have had responsibility for evaluation-related activities at the foundation for two years or less.

Tenure with evaluation-related responsibilities:
Less than 1 year: 13%
1-2 years: 25%
3-5 years: 30%
6-8 years: 14%
9 years or more: 18%

Of respondents:
62% report to the CEO/President
23% report to senior or executive level program staff
58% are evaluation staff
35% are program staff
PREVIOUS EVALUATION TRAINING

Of respondents:
45% have an advanced degree in the social sciences or applied research
37% have received training in evaluation through workshops or short courses
5% have an advanced degree in evaluation

Foundation Demographics
ORGANIZATIONAL STRUCTURE

79% of foundations do not have a dedicated evaluation unit or department. Respondents at these foundations work in program departments, in operations or administration departments, or in the President’s office or executive office.

The remaining foundations have a dedicated evaluation unit or department, separate from the program department. Of these departments:
34% have had their name changed in the past two years
21% were newly created during the past two years
19% have their own grantmaking and/or contracting budget

Larger foundations are more likely to have a dedicated evaluation unit or department.1

Common department names include: Evaluation; Evaluation and Learning; Research and Evaluation; Research, Evaluation, and Learning; and Learning and Impact.
1 A chi-square analysis was conducted between whether or not foundations have asset sizes greater than the median in our sample and whether or not foundations have a dedicated evaluation department. A statistically significant difference of a medium effect size was found. A chi-square analysis was also conducted between whether or not foundations give more than the median annual giving amount in our sample and whether or not those foundations have a dedicated evaluation department. Again, a statistically significant difference of a medium effect size was found.
EVALUATION STAFFING

About half of foundations have 1.5 full-time equivalent (FTE) staff or more regularly dedicated to evaluation work. For every 10 program staff members, the median foundation has about one FTE staff member regularly dedicated to evaluation work.

Larger foundations tend to have more staff regularly dedicated to evaluation work.2

2 An independent samples t-test indicated that foundations with asset sizes greater than the median foundation in our sample had a greater number of evaluation staff. This statistically significant difference was of a medium effect size.

MODELS OF HOW EVALUATION RESPONSIBILITIES ARE MANAGED

Staff with evaluation-related responsibilities:
direct and manage all or most work related to evaluation at 45% of foundations
provide advice and coaching to other foundation staff who manage all or most work related to evaluation at 21% of foundations
hire third parties to direct and manage all or most work related to evaluation on behalf of the foundation at 14% of foundations

EVALUATION SPENDING

About half of foundations spend $200,000 or more on evaluation (in U.S. dollars).3 About one-quarter of foundations spend $40,000 or less on evaluation, and about one-quarter of foundations spend $1 million or more on evaluation. Thirty-five percent of respondents are quite or extremely confident in the dollar estimate they provided.

3 In the survey, we did not put parameters around what respondents should or should not include in the dollar value they provided. Respondents were told it is understandable that it may be difficult to give a precise number, but to provide their best estimate.

Of respondents:
45% perceive that funding levels for evaluation work at their foundation have stayed about the same relative to the size of the program budget over the past two years
50% perceive that funding levels for evaluation work at their foundation increased relative to the size of the program budget over the past two years
Evaluation Practices

GRANTMAKING

Half of respondents (50%) report that most or all of their grantmaking is proactive (e.g., the foundation identifies and requests proposals from organizations or programs that target specific issues or are a good fit with foundation initiatives and strategies).

About one-quarter of respondents (27%) report that most or all of their grantmaking is responsive (e.g., driven by unsolicited requests from grant seekers).
PRIORITIZATION OF EVALUATION ACTIVITIES

For each activity, the first figure is the percentage of respondents spending time on the activity; the second is the percentage of respondents spending time on the activity who say it is a top priority.

Provide research or data to inform grantmaking strategy: 90% | 35%
Evaluate foundation initiatives or strategies: 88% | 51%
Refine grantmaking strategy during implementation: 87% | 26%
Develop grantmaking strategy: 86% | 34%
Design and/or facilitate learning processes or events within the foundation: 79% | 27%
Compile and/or monitor metrics to measure foundation performance: 71% | 33%
Evaluate individual grants: 71% | 34%
Design and/or facilitate learning processes or events with grantees or other external stakeholders: 70% | 15%
Improve grantee capacity for data collection or evaluation: 69% | 14%
Conduct/commission satisfaction/perception surveys (of grantees or other stakeholders): 60% | 7%
Disseminate evaluation findings externally: 57% | 9%

Half of respondents report spending time on at least nine of these activities.
CHALLENGES

Percentage of respondents who say the following practices have been at least somewhat challenging in their foundation’s evaluation efforts:4

Having evaluations result in useful lessons for the field: 83%
Having evaluations result in useful lessons for grantees: 82%
Having evaluations result in meaningful insights for the foundation: 76%
Incorporating evaluation results into the way the foundation will approach its work in the future: 70%
Allocating sufficient monetary resources for evaluation efforts: 63%
Identifying third party evaluators that produce high quality work: 59%
Having foundation staff and grantees agree on the goals of the evaluation: 36%
Having programmatic staff and third party evaluators agree on the goals of the evaluation: 31%

4 Respondents were asked to rate how challenging each of the practices has been to their foundation’s evaluation efforts on a 1-5 scale, where 1 = ‘Not at all challenging,’ 2 = ‘Not very challenging,’ 3 = ‘Somewhat challenging,’ 4 = ‘Quite challenging,’ and 5 = ‘Extremely challenging.’ The percentages included above represent respondents who rated a 3, 4, or 5 on an item.

INVESTMENT IN EVALUATION ACTIVITIES

Percentage of respondents who say their foundation invests too little in the following evaluation activities:

Disseminating evaluation findings externally: 71%
Improving grantee capacity for data collection or evaluation: 69%
Designing and/or facilitating learning processes or events with grantees or other external stakeholders: 58%
Compiling and/or monitoring metrics to measure foundation performance: 55%
Designing and/or facilitating learning processes or events within the foundation: 48%
Evaluating foundation initiatives or strategies: 44%
Providing research or data to inform grantmaking strategy: 42%
Conducting/commissioning satisfaction/perception surveys (of grantees or other stakeholders): 41%
Refining grantmaking strategy during implementation: 39%
Developing grantmaking strategy: 26%
Evaluating individual grants: 22%
MOST COMMON APPROACHES TO FUNDING GRANTEES’ EVALUATION EFFORTS

41% of foundations have no common approach to evaluating grants because funding evaluation efforts differs widely across the foundation’s program or strategy areas.
19% of respondents report that grantees can spend a portion of their grant dollars on evaluation if they request to do so.
10% of respondents report that grantees receive general operating support dollars, and they can choose to use these dollars for evaluation.5
12% of respondents say the foundation commissions outside evaluators to evaluate individual grantees’ work.

5 All other response options for this item were selected by fewer than 10 percent of respondents and are not shown here.
Percentage of individual grants funded for evaluation: respondents reported categories ranging from none to more than 75 percent. Almost two-thirds of respondents (63%) say their foundations fund evaluations for less than 10 percent of individual grants.
RANDOMIZED CONTROL TRIALS

About one-fifth of respondents (19%) say their foundations have provided funding for a randomized control trial of their grantees’ work in the past three years.

Of those who have provided funding for a randomized control trial:
63% found it quite or extremely useful in providing evidence for the field about what does and does not work
38% found it quite or extremely useful in future grantmaking decisions
42% found it quite or extremely useful in understanding the impact the foundation’s grant dollars are making
25% found it quite or extremely useful in refining foundation strategies or initiatives
TYPES OF EVALUATION

Frequency with which different types of evaluations are conducted on grantees’ work and foundation initiatives or strategies:

Grantees’ Work
Summative: 20% regularly, 52% occasionally, 28% never
Formative: 15% regularly, 53% occasionally, 31% never
Developmental: 10% regularly, 46% occasionally, 44% never

Foundation Initiatives or Strategies
Summative: 25% regularly, 53% occasionally, 22% never
Formative: 20% regularly, 57% occasionally, 23% never
Developmental: 22% regularly, 36% occasionally, 42% never

COLLABORATION

Over 40 percent of respondents say their foundation has engaged in efforts to coordinate its evaluation work with other funders working in the same issue areas.

Yes, we are already engaged in such efforts: 42%
No, but we are currently considering such efforts: 28%
No, we considered it but concluded it was not right for us: 6%
No, we have not considered engaging in any such efforts: 24%

Using Evaluation Information
UNDERSTANDING

Percentage of respondents who believe their foundation understands quite or very accurately what it has accomplished through its work, when it comes to each of the following:
Grantee organizations it seeks to affect: 46%
Fields it seeks to affect: 35%
Communities it seeks to affect: 22%
Ultimate beneficiaries it seeks to affect: 20%

CHALLENGES

Percentage of respondents who say each of the following is a challenge for program staff’s use of information collected through, or resulting from, evaluation work:
Program staff’s time: 91%
Program staff’s level of comfort in interpreting/using data: 71%
Program staff’s attitudes toward evaluation: 50%
Program staff’s lack of involvement in shaping the evaluations conducted: 40%
USE OF INFORMATION

Percentage of respondents who say program staff are likely to use information collected through, or resulting from, evaluations to inform the following aspects of their work:
Understand what the foundation has accomplished through its work: 80%
Decide whether to expand into new program areas or exit program areas: 76%
Decide whether to adjust grantmaking strategies during implementation: 74%
Decide whether to renew grantees’ funding: 71%
Strengthen grantee organizations’ future performance: 63%
Hold grantees accountable to the goals of their grants: 57%
Communicate publicly about what the foundation has learned through its work: 56%
Decide whether to award a first grant: 40%

LEVEL OF ENGAGEMENT OF SENIOR MANAGEMENT

Respondents rated senior management’s engagement as no engagement, too little engagement, the appropriate amount of engagement, or too much engagement.

LESS THAN HALF of respondents (44%) say senior management engages the appropriate amount in supporting adequate investment in the evaluation capacity of grantees.

LESS THAN HALF of respondents (43%) say senior management engages the appropriate amount in considering the results of evaluation work as an important criterion when assessing staff.

ABOUT HALF of respondents (52%) say senior management engages the appropriate amount in modeling the use of information resulting from evaluation work in decision making.

OVER TWO-THIRDS of respondents (68%) say senior management engages the appropriate amount in communicating to staff that it values the use of evaluation and evaluative information.

When a respondent says the foundation’s senior management engages less than the appropriate amount in evaluation, the foundation is significantly more likely to experience the following evaluation challenges:
► Allocating sufficient monetary resources for evaluation efforts
► Incorporating evaluation results into future work
► Having evaluations result in useful lessons for the field
LEVEL OF SUPPORT FROM BOARD

Respondents rated board support as no support, little support, moderate support, or high support.

ALMOST 40 PERCENT of respondents (38%) say there is a high level of board support for the use of evaluation or evaluative data in board-level decision making.

ABOUT HALF of respondents (52%) say there is a high level of board support for the use of evaluation or evaluative data in decision making by staff at the foundation.

FORTY PERCENT of respondents (40%) say there is a high level of board support for the role of evaluation staff at the foundation.

ONLY ONE-THIRD of respondents (34%) say there is a high level of board support for foundation spending on evaluation.

When a foundation’s board is less supportive of evaluation, the foundation is significantly more likely to experience the following evaluation challenges:
► Allocating sufficient monetary resources for evaluation efforts
► Having evaluations result in meaningful insights
► Incorporating evaluation results into future work
► Having foundation staff and grantees agree on evaluation goals
► Having evaluations result in useful lessons for grantees
► Having evaluations result in useful lessons for the field

SHARING INFORMATION

Percentage of respondents who say evaluation findings are shared with the following audiences quite a bit or a lot:
Foundation’s CEO: 77%
Foundation’s staff: 66%
Foundation’s board: 47%
Foundation’s grantees: 28%
Other foundations: 17%
General public: 14%
Looking Forward

THE TOP THREE CHANGES EVALUATION STAFF HOPE TO SEE IN FIVE YEARS⁶

1. Foundations will be more strategic in the way they plan for and design evaluations so that information collected is meaningful and useful.

“Implement more strategic evaluation designs to measure initiatives and key areas of investment.”

“Develop clear strategies and goals for what [the foundation] hopes to measure and assess.”

“My sole wish is that evaluation data is meaningful—that it is actually linked to strategy.”

6 Of evaluation staff who responded to our survey, 74 percent, or 94 of 127 respondents, answered the open-ended question, “In five years, what do you hope will have changed for foundations in the collection and/or use of evaluation data or information?”
2. Foundations will use evaluation data for decision-making and improving practice.

“Use evaluation deliverables to inform decisions that improve our foundation and grantee performance.”

“More and more effective use of evaluative data and information for the purpose of learning and improvement for foundations.”

“I would like to see the full integration of evaluation into foundation daily practice and routine decision making.”

3. Foundations will be more transparent about their evaluations and share what they are learning externally.

“More public sharing both internally and externally and more frank conversation about what worked or didn’t work.”

“I want to expand our ability to share information to inform the fields in which we work and to inform our audiences, such as donors and policymakers.”

“To improve the level of transparency surrounding evaluation, less emphasis on perfection and more on discovery.”
DISCUSSION QUESTIONS

1. What is the purpose of evaluation at your foundation?
• How do your foundation’s evaluation efforts align with its goals and strategies, if at all?
• How does leadership at your foundation use information from the foundation’s evaluation work, if at all?
• How do your foundation’s evaluation efforts align, or not align, with its organizational culture?

2. How does your foundation make decisions about each of the following:
• How much to budget for evaluation work?
• Which costs will be categorized as evaluation costs (e.g., salaries of staff with evaluation responsibilities, third party evaluators, data collection efforts, etc.)?

3. How are responsibilities for evaluation work structured at your foundation?
• How many staff have evaluation-related responsibilities at your foundation?
• What are the evaluation-related job responsibilities of these staff members? On what do they spend their time?
• In which department or area do staff with evaluation-related responsibilities work, and why?

4. How, if at all, does your foundation use information from its evaluation work to inform programmatic decisions?

5. How are decisions made about with whom evaluation information will be shared:
• Inside the foundation?
• Outside of the foundation?

6. What changes would you like to see regarding evaluation at your foundation?
• What would you hope would happen as a result of these changes?
METHODOLOGY

SAMPLE

Foundations were considered for inclusion in this sample if they:
• were based in the United States or Canada;
• were an independent foundation, including health conversion foundations, or a community foundation, as categorized by Foundation Directory Online and CEP’s internal contact management software;
• provided $10 million or more in annual giving, according to information provided to CEP from Foundation Center in September 2014 and the Canada Revenue Agency, with help from Philanthropic Foundations Canada;
• or were members of the Center for Evaluation Innovation’s (CEI) Evaluation Roundtable.

For foundations that were members of CEI’s Evaluation Roundtable, the foundation’s representative to the Roundtable was included in the sample. For all other foundations, the following criteria were used to determine the most senior person at the foundation who was most likely to have evaluation-related responsibilities.

An individual was deemed to be evaluation staff if his/her title included one or more of the following words, according to the foundation’s website:

1. Evaluation  2. Assessment  3. Research
4. Measurement  5. Effectiveness  6. Knowledge
7. Learning  8. Impact  9. Strategy
10. Planning  11. Performance  12. Analysis

To determine which evaluation staff member at a foundation was the most senior, the following role hierarchy was used:

1. Senior Vice President  2. Vice President  3. Director
4. Deputy Director  5. Senior Manager  6. Manager
7. Senior Officer  8. Officer  9. Associate

If no staff on a foundation’s website had titles or roles that included the above words related to evaluation, the most senior program staff member at the foundation was chosen for inclusion in the sample. Program staff were identified as having titles that included the words “Program” or “Grant,” or mentioned a specific program area (e.g., “Education” or “Environment”). The same role hierarchy described above was used to determine seniority.
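As a rough illustration only—not the study’s actual procedure or code—the sketch below shows how a title-based screen and role hierarchy like the one described above could be applied. The function names and sample titles are hypothetical, and the program-staff fallback is simplified (it ignores program-area names).

```python
# Hypothetical sketch of the title-screening rule described above.
# Keyword list and role hierarchy come from the text; function names
# and sample titles are invented for illustration.

EVAL_KEYWORDS = ["evaluation", "assessment", "research", "measurement",
                 "effectiveness", "knowledge", "learning", "impact",
                 "strategy", "planning", "performance", "analysis"]

ROLE_RANK = {"senior vice president": 1, "vice president": 2, "director": 3,
             "deputy director": 4, "senior manager": 5, "manager": 6,
             "senior officer": 7, "officer": 8, "associate": 9}

def is_evaluation_staff(title: str) -> bool:
    """A title counts as evaluation staff if it contains any keyword."""
    t = title.lower()
    return any(word in t for word in EVAL_KEYWORDS)

def seniority(title: str) -> int:
    """Lower number = more senior; match longer role names first so
    'Deputy Director' is not mistaken for 'Director'. Unknown roles sort last."""
    t = title.lower()
    for role in sorted(ROLE_RANK, key=len, reverse=True):
        if role in t:
            return ROLE_RANK[role]
    return 99

def pick_respondent(titles: list[str]) -> str:
    """Most senior evaluation staff member; otherwise most senior program staff."""
    eval_staff = [t for t in titles if is_evaluation_staff(t)]
    if eval_staff:
        return min(eval_staff, key=seniority)
    program = [t for t in titles if "program" in t.lower() or "grant" in t.lower()]
    return min(program or titles, key=seniority)

# Example: the Director of Learning outranks the Evaluation Officer.
print(pick_respondent(["Evaluation Officer", "Director of Learning", "Program Associate"]))
```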
Only those individuals who had an e-mail address that could be accessed through the foundation’s website, CEP staff knowledge, or CEI staff knowledge were deemed eligible to receive the survey.

In September 2015, 271 foundation staff were initially sent an invitation to complete the survey. Two new members of the Evaluation Roundtable were later added to the sample and sent the survey. Later, 19 individuals were removed from the sample because they did not meet the inclusion criteria. Completed surveys were received from 120 staff members, and partially completed surveys, defined as being at least 50 percent complete, were received from seven staff members.

Thus, our final sample of respondents included 127 of the 254 potential respondents, for a response rate of 50 percent. Of the foundation staff who responded to the survey, 58 percent were evaluation staff, 35 percent were program staff, and six percent were staff with a title that did not fall into either of these two categories, based on our previously defined criteria.

METHOD

The survey was fielded online during a four-week period from September to October of 2015. Foundation staff with evaluation-related responsibilities were sent a brief e-mail including a description of the purpose of the survey, a statement of confidentiality, and a link to the survey. These staff were sent up to nine reminder e-mails and received up to one reminder phone call.

The survey consisted of 43 items, some of which contained several sub-items. Respondents were asked about a variety of topics, including their role at their foundation and previous experience, their foundation and its evaluation function, their foundation’s specific evaluation practices, and the ways in which information collected through evaluations is used.

RESPONSE BIAS

Foundations with staff who responded to this survey did not differ from non-respondent organizations by annual asset size, annual giving amount, region of the United States in which the foundation is located, or whether or not the foundation is an independent foundation. Information on assets and giving was purchased from Foundation Center in September 2014. Evaluation staff of foundations that are part of CEI’s Evaluation Roundtable were more likely to respond to the survey than evaluation staff of foundations that are not part of CEI’s Evaluation Roundtable.
SAMPLE DEMOGRAPHICS

Sixty-seven percent of the foundations represented in our final sample were independent foundations and 25 percent were community foundations. Of the independent foundations, 13 percent were health conversion foundations. The final eight percent of foundations in our sample included other types of funders that were part of the Evaluation Roundtable, aside from independent or community foundations.

The median asset size for foundations in the sample was about $530 million and the median annual giving level was about $28 million. The median number of full-time equivalent staff working at foundations in this study was 25. The number of full-time equivalent staff is based on information purchased from Foundation Center in September 2014.
QUANTITATIVE ANALYSIS

To analyze the quantitative survey data from foundation leaders, descriptive statistics were examined. Chi-square analyses and independent samples t-tests were also conducted to examine the relationship between foundation size and evaluation structure. An alpha level of 0.05 was used to determine statistical significance for all testing, and effect sizes were examined for all analyses. Because our sample consisted of only 32 community foundations, we were unable to rigorously explore statistical differences between independent and community foundations in this study.
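To make the analytic approach concrete, here is a minimal, hypothetical sketch—not the authors’ code, and with made-up numbers—of the general kinds of tests and effect sizes described above: a chi-square test on a 2x2 table with a phi effect size, and an independent samples t-test (here, Welch’s variant, an assumption on our part) with Cohen’s d.

```python
# Illustrative only: invented data, not the study's.
import numpy as np
from scipy import stats

# 2x2 table: rows = assets above/below the median, columns = has/lacks a
# dedicated evaluation department (counts are made up).
table = np.array([[30, 25],
                  [12, 45]])
chi2, p, dof, expected = stats.chi2_contingency(table)
phi = np.sqrt(chi2 / table.sum())  # for a 2x2 table, phi of ~0.3 is a medium effect
print(f"chi-square: p = {p:.3f}, phi = {phi:.2f}")

# Evaluation FTEs for larger vs. smaller foundations (invented values).
larger = np.array([2.0, 3.5, 1.5, 4.0, 2.5])
smaller = np.array([0.5, 1.0, 0.0, 1.5, 1.0])
t, p = stats.ttest_ind(larger, smaller, equal_var=False)  # Welch's t-test
pooled_sd = np.sqrt((larger.var(ddof=1) + smaller.var(ddof=1)) / 2)
d = (larger.mean() - smaller.mean()) / pooled_sd  # Cohen's d; ~0.5 is a medium effect
print(f"t-test: p = {p:.3f}, d = {d:.2f}")
```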
QUALITATIVE ANALYSIS

Thematic and content analyses were conducted on the responses to the open-ended question, “In five years, what do you hope will have changed for foundations in the collection and/or use of evaluation data or information?” A coding scheme was developed for this item by reading through all responses to recognize recurring ideas, creating categories, and then coding each respondent’s ideas according to the categories.

A codebook was created to ensure that different coders would be coding for the same concepts rather than their individual interpretations of the concepts. One coder coded all responses to the question and a second coder coded 15 percent of those responses. At least an 80 percent level of inter-rater agreement was achieved for each code.

Selected quotations were included in this publication. These quotations were selected to be representative of the themes seen in the data.
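As a minimal sketch of the inter-rater check described above—assumed for illustration, not the study’s actual code, with invented code names and responses—simple percent agreement per code between two coders on the double-coded subset can be computed like this:

```python
# Hypothetical illustration of per-code percent agreement between two coders.
# Each element is the set of codes one coder assigned to a single response.

def percent_agreement(coder_a: list[set[str]], coder_b: list[set[str]], code: str) -> float:
    """Share of double-coded responses where both coders made the same call on `code`."""
    matches = sum((code in a) == (code in b) for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

coder_a = [{"transparency"}, {"use_for_decisions"}, set(), {"transparency", "strategy"}]
coder_b = [{"transparency"}, set(), set(), {"transparency"}]

for code in ["transparency", "use_for_decisions", "strategy"]:
    print(code, f"{percent_agreement(coder_a, coder_b, code):.0%}")
```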
www.effectivephilanthropy.org
www.evaluationinnovation.org
CEP FUNDERS
We are very appreciative of the support that made this work possible. See below for a list of funders.
INDIVIDUAL CONTRIBUTORS
Michael Bailin
Kevin Bolduc
Phil Buchanan
Alyse d’Amico
John Davidson
Robert Eckardt
Phil Giudice
Tiffany Cooper Gueye
Crystal Hayling
Paul Heggarty
Bob Hughes
Barbara Kibbe
Latia King
Patricia Kozu
Kathryn E. Merchant
Grace Nicolette
Richard Ober
Alex Ocasio
Grant Oliphant
Hilary Pennington
Christy Pichel
Nadya K. Shmavonian
Fay Twersky
Jen Vorse Wilka
Lynn Perry Wooten
$500,000 OR MORE
Fund for Shared Insight
Robert Wood Johnson Foundation
The Rockefeller Foundation
The William and Flora Hewlett Foundation
$200,000 TO $499,999
The David and Lucile Packard Foundation
Ford Foundation
W.K. Kellogg Foundation
$100,000 TO $199,999
Barr Foundation
The James Irvine Foundation
The Kresge Foundation
Rockefeller Brothers Fund
S.D. Bechtel, Jr. Foundation
$50,000 TO $99,999
Gordon and Betty Moore Foundation
The Wallace Foundation
$20,000 TO $49,999
Carnegie Corporation of New York
Charles Stewart Mott Foundation
The Duke Endowment
John D. and Catherine T. MacArthur Foundation
Lumina Foundation
Surdna Foundation
W. Clement and Jessie V. Stone Foundation
UP TO $19,999
The Assisi Foundation of Memphis
California HealthCare Foundation
The Colorado Health Foundation
The Columbus Foundation
The Commonwealth Fund
Doris Duke Charitable Foundation
Evelyn and Walter Haas, Jr. Fund
The Heinz Endowments
Henry Luce Foundation
Houston Endowment
Kansas Health Foundation
The Leona M. and Harry B. Helmsley Charitable Trust
The McKnight Foundation
New Hampshire Charitable Foundation
New York State Health Foundation
Oak Foundation
Public Welfare Foundation
Richard M. Fairbanks Foundation
Saint Luke’s Foundation
Sobrato Family Foundation
Teagle Foundation
Weingart Foundation
Wilburforce Foundation
William Penn Foundation
675 Massachusetts Avenue
7th Floor
Cambridge, MA 02139
(617) 492-0800
131 Steuart Street
#501
San Francisco, CA 94105
(415) 391-3070
1625 K Street, NW
Suite 1050
Washington, DC 20006
(202) 728-0727 ext.116
www.effectivephilanthropy.org
www.evaluationinnovation.org
A RESEARCH STUDY BY:
STATE OF EVALUATION 2016
Evaluation Practice and Capacity
in the Nonprofit Sector
#SOE2016
We appreciate the support of the David and Lucile Packard Foundation, the Robert Wood Johnson Foundation, and the Bruner Foundation for funding this research. The findings and conclusions presented in this report are those of the authors alone and do not necessarily reflect the opinions of our funders. This project was informed by seven advisors: Diana Scearce, Ehren Reed, Lester Baxter, Lori Nascimento, Kelci Price, Michelle M. Doty, and J McCray. Thank you for your time and suggestions. The authors acknowledge Innovation Network staff Natalia Goldin, Elidé Flores-Medel, and Laura Lehman for their contributions to this report, Stephanie Darby for the many ways she makes our work possible, and consultant Erin Keely O’Brien for her indispensable survey design advice. We also thank former Innovation Network staff Katherine Haugh, Yuqi Wang, and Smriti Bajracharya for their contributions to project design, and our evaluation colleagues and former State of Evaluation authors Ehren Reed and Ann K. Emery. And finally, we recognize the 1,125 staff of nonprofit organizations throughout the United States who took the time to thoughtfully complete the State of Evaluation 2016 survey.
stateofevaluation.org
HELLO!

We’re thrilled that you are interested in the state of nonprofit evaluation. We’re more than passionate about the topic—strengthening nonprofit evaluation capacity and practice is the driving force that animates our work, research, and organization mission: Measure results. Make informed decisions. Create lasting change.

Innovation Network is a nonprofit evaluation, research, and consulting firm. We provide knowledge and expertise to help nonprofits and funders learn from their work to improve their results.

For more than 20 years, Innovation Network staff have worked with nonprofits and funders to design, conduct, and build capacity for evaluation. In 2010 we kicked off the State of Evaluation project, and reports from 2010 and 2012 built a knowledge base about nonprofit evaluation: how many nonprofits were doing evaluation, what they were doing, and how they were doing it. The State of Evaluation project is the first nationwide project that systematically and repeatedly collects data from nonprofits about their evaluation practices.

In 2016 we received survey responses from 1,125 US-based 501(c)3 organizations. State of Evaluation 2016 contains exciting updates, three special topics (evaluation storage systems and software, big data, and pay for performance), and illuminating findings for everyone who works with nonprofit organizations.

As evaluators, we’re encouraged that 92% of nonprofit organizations engaged in evaluation in the past year—an all-time high since we began the survey in 2010. And evaluation capacity held steady since 2012: both then and now, 28% of nonprofit organizations exhibit a promising mix of characteristics and practices that makes them more likely to meaningfully engage in and benefit from evaluation. Funding for evaluation is also moving in the right direction, with more nonprofits receiving funding for evaluation from at least one source than in the past (in 2016, 92% of organizations that engaged in evaluation received funding from at least one source, compared to 66% in 2012).

But for all those advances, some persistent challenges remain. In most organizations, pivotal resources of money and staff time for evaluation are severely limited or nonexistent. In 2016, only 12% of nonprofit organizations spent 5% or more of their organization budgets on evaluation, and only 8% of organizations had evaluation staff (internal or external), representing sizeable decreases.

Join us in being part of the solution: support increases in resources for evaluation (both time and money), plan and embed evaluation alongside programs and initiatives, and prioritize data in decision making. We’re more committed than ever to building nonprofit evaluation capacity, and we hope you are too.

Johanna Morariu  Veena Pankaj
Kat Athanasiades  Deborah Grodzicki
State of Evaluation 2016

Evaluation is the tool that enables nonprofit organizations to define success and measure results in order to create lasting change. Across the social sector, evaluation takes many forms. Some nonprofit organizations conduct randomized control trial studies, and other nonprofits use qualitative designs such as case studies. Since this project launched in 2010, we have seen a slow but positive trend of an increasing percentage of nonprofit organizations engaging in evaluation. In 2016, 92% of nonprofit organizations engaged in evaluation.
Promising Capacity

Evaluation capacity is the extent to which an organization has the culture, expertise, and resources to continually engage in evaluation (i.e., evaluation as a continuous lifecycle of assessment, learning, and improvement). Criteria for adequate evaluation capacity may differ across organizations. We suggest that promising evaluation capacity consists of some internal evaluation capacity, the existence of some foundational evaluation tools, and a practice of at least annually engaging in evaluation. Applying these criteria, 28% of nonprofit organizations exhibit promising evaluation capacity (exactly the same percentage as in 2012).
In 2016, 92% of nonprofit organizations evaluated their work [n=1,125], compared to 90% in 2012 [n=535] and 85% in 2010 [n=1,043]; in 2016, 8% did not evaluate their work or did not know whether their organization evaluated its work. Of the organizations that evaluated their work in 2016, 83% had medium or high internal capacity, 62% had a logic model or similar document, and 74% of those with logic models or similar documents updated them within the past year. Overall, 28% of nonprofit organizations have promising evaluation capacity.
Nonprofit Evaluation Report Card

The nonprofit evaluation report card is a round-up of key indicators about resources, behaviors, and culture. Comparing 2016 to 2012, what’s gotten better, worse, or stayed the same?

92% of nonprofit organizations engaged in evaluation, compared to 90% of nonprofits in 2012
92% of nonprofit organizations received funding for evaluation from at least one source, compared to 66% of nonprofits in 2012
85% of nonprofit organizations agree that they need evaluation to know that their approach is working, compared to 68% of nonprofits in 2012
28% of nonprofit organizations exhibited promising evaluation capacities, compared to 28% of nonprofits in 2012
58% of nonprofit organizations had a logic model or theory of change, compared to 60% of nonprofits in 2012
12% of nonprofit organizations spent 5% or more of their budgets on evaluation, compared to 27% of nonprofits in 2012
8% of nonprofit organizations have evaluation staff primarily responsible for evaluation, compared to 25% of nonprofits in 2012
Purpose & Resourcing

Nonprofit organizations conduct evaluation for a variety of reasons. Evaluation can help nonprofits better understand the work they are doing and inform strategic adjustments to programming. In 2016, nonprofits continue to prioritize evaluation for the purpose of strengthening future work (95% of respondents identified it as a priority), learning whether objectives were achieved (94%), and learning about outcomes (91%). [n=877]

Funding for Evaluation

In 2016, 92% of organizations identified at least one source of funding for evaluation, and 68% of organizations that received funding identified foundations and philanthropy as the top funding source for evaluation. [n=775]
Priorities for Conducting Evaluation: respondents rated each purpose as an essential or moderate priority. Purposes asked about were strengthening future work, learning whether original objectives were achieved, learning about outcomes, learning from implementation, contributing to knowledge in the field, responding to funder demands, and strengthening public policy.

Sources of funding for evaluation asked about were foundation or philanthropic contributions, individual donor contributions, corporate charitable contributions, government grants or contracts, and dues, fees, or other direct charges. Organizations funded by philanthropy were more likely to measure outcomes.
Budgeting for Evaluation

A majority of organizations spend less than the recommended amount on evaluation. Based on our experience working in the social sector, nonprofit organizations should be allocating 5% to 10% of organization budgets to evaluation. In 2016, 84% of organizations spent less than the recommended amount on evaluation. [n=740]

Organizations spent less on evaluation than in prior years. The gap between how much an organization should spend on evaluation and what they actually spend has increased. [n=775]

Percent of Annual Budget Spent on Evaluation
0% of budget: 16% of organizations
Less than 2%: 43% of organizations
2-5%: 25% of organizations
5-10%: 8% of organizations
10% or more: 4% of organizations

In total, 84% of organizations spend less than 5% of their budget on evaluation.

The percentage of organizations that spend less than 5% of the organization budget on evaluation has increased from 76% in 2010 to 84% in 2016. The percentage of organizations that spend no money on evaluation has increased from 12% in 2010 to 16% in 2016. The percentage of organizations that spend 5% or more of the organization budget has declined from 23% in 2010 to 12% in 2016.
EVALUATION DESIGN & PRACTICE

Across the sector, nonprofits are more likely to use quantitative data collection methods. Large organizations (organizations with annual budgets of $5 million or more) continue to engage in a higher percentage of both quantitative and qualitative data collection methods than small organizations (annual budgets of less than $500,000) and medium-sized organizations (annual budgets of $500,000 to $4.99 million).

There are a variety of evaluation approaches available to nonprofit organizations. In 2016, over 90% of organizations have selected approaches to understand the difference they are making through their work (outcomes evaluation), how well they are implementing their programs (process/implementation evaluation), and to assess overall performance accountability (performance measurement).
Common Evaluation Designs (percentage of organizations using each): outcomes evaluation 91%, performance measurement 86%, process/implementation evaluation 85%, formative evaluation 77%, impact evaluation 66%, return on investment 53%.

Data Collection Methods: organizations were asked about quantitative practices (surveys, client/participant tracking forms, and social media statistics and/or web analytics) and qualitative practices (focus groups, interviews, and observations), compared across small, medium, and large organizations. For this analysis, n values for small organizations ranged from 235 to 241, n values for medium organizations ranged from 370 to 380, and n values for large organizations ranged from 110 to 112.
Evaluation Planning

Logic models and theories of change are common tools used by organizations to guide evaluation design and practice. They demonstrate the connections between the work being done by an organization and the types of changes programs are designed to achieve. Having a logic model or theory of change is seen as a positive organizational attribute and demonstrates an organization’s ability to plan for an evaluation. Since 2010, more large organizations have created or revised their logic model/theory of change at least annually, in comparison to small organizations.

Across all organizations [n=812], 58% have a logic model or theory of change, and 44% created or revised a logic model or theory of change in the past year.

In 2016, 56% of large organizations created or revised their logic model or theory of change within the past year (up from 45% in 2012, and on par with 2010). [2016: n=112]

In 2016, 35% of small organizations created or revised their logic model or theory of change within the past year (up from 30% in 2012, and on par with 2010). [2016: n=242]

Organizations that have a logic model/theory of change are more likely to:
• agree that evaluation is needed to know that an approach is working
• have higher organizational budgets
• have more supports for evaluation
• have foundation or philanthropic charitable contributions support their evaluation
Staffing

Evaluation staffing varies dramatically among nonprofit organizations. One in five large organizations and almost one in ten medium organizations have evaluation staff. Not surprisingly, small organizations are very unlikely to have evaluation staff. While this difference in staff resources is intuitive, organizations without evaluation staff are still frequently expected to collect data, measure their results, and report data to internal and external audiences. Furthermore, organizations with evaluation staff were found to be more likely to exhibit the mix of characteristics and behaviors of promising evaluation capacity.

Who is primarily responsible for conducting evaluation work within an organization varies dramatically. Of organizations that evaluate, most nonprofit organizations (63%) report that the executive leadership or program staff were primarily responsible for conducting evaluations. Only 6% of nonprofit organizations report having internal evaluation staff, and 2% indicate that evaluation work was led by an external evaluator. [n=786]

Staff primarily responsible for conducting evaluation: executive leadership 34%, program 29%, administrative 8%, evaluation 6%, fundraising & development 5%, board of directors 4%, financial 1%. In all, 63% of organizations have executives and program staff primarily responsible for evaluation, and only 6% of organizations have internal evaluators. Less than 1% of organizations do not have someone responsible for conducting evaluations.

Organization Size, Staffing, and Capacity

Large and medium organizations were significantly more likely to have evaluation staff (20% and 8%, respectively), compared to small organizations (2%), p < .001. Significantly more organizations with evaluation staff reported having promising evaluation capacity (60%), compared to organizations without evaluation staff (36%), p < .001.
External Evaluators

Looking beyond who is primarily responsible for conducting evaluation work, a larger proportion of nonprofits work with external evaluators in some way: in 2016, 27% of nonprofit organizations worked with an external evaluator. Similar to 2010 and 2012, representatives of nonprofit organizations that worked with an external evaluator were strongly positive about their experience. [n=208]

Organization size was associated with the likelihood of working with an external evaluator. Almost half of large organizations worked with an external evaluator (49%) [n=110], compared to 29% of medium organizations [n=381] and 14% of small organizations [n=243].

Funding source was also associated with likelihood of working with an external evaluator. Organizations with philanthropy as a funding source were significantly more likely to have worked with external evaluators (30%) compared to organizations that did not have philanthropy as a funding source (20%), p=.004.

Satisfaction with working with external evaluators does not differ by organization size. Among organizations that worked with an external evaluator: 77% say the evaluator completed a high quality evaluation, 77% say working with an evaluator improved their work, 76% say the evaluator was a good use of their resources, and 74% would hire an evaluator for future projects.
AUDIENCE & USE

In 2016, 85% of nonprofit organizations reported that executive staff (CEO or ED) were the primary audience for evaluation, a 10% increase from 2012. Over three-quarters of nonprofit organizations also identified the Board of Directors as a primary audience. [n=842]

Audiences for evaluation (primary audience / secondary audience / not an audience):
Executive Staff (CEO/ED): 85% / 11% / 3%
Board of Directors: 76% / 21% / 3%
Funders: 70% / 25% / 4%
Non-executive Staff: 40% / 46% / 13%
Clients: 29% / 37% / 32%
Policymakers: 23% / 37% / 37%
Peer Organizations: 8% / 52% / 38%

Similar to previous years, most nonprofit organizations use evaluation to measure outcomes, answering the question of What difference did we make? Almost as many nonprofit organizations focus on measuring quality and quantity, answering the questions of How well did we do? and How much did we do? This finding highlights the importance nonprofit organizations place on measuring quality and quantity in addition to outcomes. [n=953]

Evaluation Focus: What difference did we make? 91.3%; How well did we do? 90.6%; How much did we do? 89.6%.
Evaluation Use

Organizations use evaluation results for internal and external purposes, with 94% of nonprofit organizations using results to report to the Board of Directors (internal) and 93% of nonprofit organizations using results to report to funders (external). [n=869] Other uses asked about include planning or revising program initiatives, planning or revising general strategies, reporting to stakeholders, making allocation decisions, sharing findings with peers, and advocating for a cause.

Nonprofit organizations were asked to indicate the extent to which they use evaluation results to support new initiatives, compare organization performance to a specific goal or benchmark, and/or assess organization performance without comparing results to a specific goal or benchmark. Over 85% of organizations regularly use evaluation results to support the development of new initiatives, demonstrating a growing number of organizations using evaluation to inform and develop their work. [n=851]

Support development of new initiatives: 44% often, 42% sometimes
Compare to specific goal or benchmark: 51% often, 33% sometimes
Assess performance without comparing to goal or benchmark: 43% often, 38% sometimes
Barriers & Supports

Time, staff, money, knowledge—these and other factors can heavily influence whether organizations conduct evaluation. Support from organizational leadership, sufficient staff knowledge, skills, and/or tools, and an organization culture in support of evaluation were far and away the most helpful organizational supports for evaluation.

Limited staff time, insufficient financial resources, and limited staff expertise in evaluation continue to be the top three barriers organizations face in evaluating their work. These are the same top three challenges named in State of Evaluation 2012 and 2010. Having staff who did not believe in the importance of evaluation has become a much greater barrier than in the past: more than twice the percentage of organizations now report this as a challenge to carrying out evaluation (2012: 12%).

Barriers asked about [n=830]: limited staff time; insufficient financial resources; limited staff knowledge, skills, and/or tools; having staff who did not believe in the importance of evaluation; disagreements related to data between your organization and a funder; not knowing where or how to find an external evaluator; stakeholders being resistant to data collection; and insufficient support from organization leadership.

Supports asked about [n=777]: support from organization leadership; sufficient staff knowledge, skills, and/or tools; organization culture in support of evaluation; staff dedicated to evaluation tasks; funder support for evaluation; sufficient funding; and working with an external evaluator.
Communication and Reporting

Most organizations view evaluation favorably. It is a tool that gives organizations the confidence to say whether their approach works. However, a little under half feel too pressured to measure their results. [n=901]

Of organizations that evaluate their work, 83% talk with their funders about evaluation. This dialogue with funders is generally deemed useful by nonprofits. [n values range from 661 to 800]

Our organization needs evaluation to know that our approach is working: 85% agree
Data collection interfered with our relationships with clients: 73% disagree
Most data that is collected in our organization is not used: 69% disagree
There is too much pressure on our organization to measure our results: 57% disagree
93% of funders are accepting of failure as an opportunity for learning (49% often, 44% sometimes); 74% of nonprofits find discussing findings with funders useful (32% often, 42% sometimes); and 69% of nonprofits’ funders are open to including evaluation results in decision making (15% often, 54% sometimes).
Big Data
Data Trends
Big data is a collection of data, or a dataset, that is very large.
A classic example of big data is US Census data.
Using US Census data, staff of a nonprofit organization that
provided employment services could learn more
about employment trends in their community, or monitor
changes in employment over time. The ability to
work with big data is important, as it enables nonprofits to learn
about trends and better target services and
other interventions. Thirty-eight percent of organizations are
currently using big data, and an additional 9%
of organizations report having the ability to use big data but are
not currently doing so. [n=820]
Big Data: What It Takes

Organizations that used big data had more supports for evaluation and larger budgets.

38% of nonprofits are currently using big data; 9% have the ability to use big data but are not currently using it; 41% do not have the ability to use big data.

Organizations that use big data had more supports for evaluation (4.06 supports) than those that did not use big data (3.23 supports), p<.001. Organizations that use big data also had larger budgets (budgets of approximately $2.0-2.49 million, mean=5.56) than those that did not use big data (budgets of approximately $1.5-1.99 million, mean=4.13), p=.003.
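The comparisons above (and the pay for performance comparisons below) are reported as differences in group means with p-values. As a rough illustration only, and not the report's actual analysis (which was run in SPSS), the following sketch shows how such a comparison could be reproduced in Python, assuming a hypothetical response file with one row per organization, a boolean uses_big_data flag, and a num_supports count of evaluation supports:

    # Illustrative sketch only; the file name and column names are hypothetical.
    # The report's own significance tests were run in SPSS on the survey data.
    import pandas as pd
    from scipy import stats

    df = pd.read_csv("soe_2016_responses.csv")

    users = df.loc[df["uses_big_data"], "num_supports"].dropna()
    non_users = df.loc[~df["uses_big_data"], "num_supports"].dropna()

    # Welch's t-test: compares the mean number of evaluation supports
    # between organizations that do and do not use big data.
    t_stat, p_value = stats.ttest_ind(users, non_users, equal_var=False)

    print(f"mean supports, big data users: {users.mean():.2f}")
    print(f"mean supports, non-users:      {non_users.mean():.2f}")
    print(f"Welch t = {t_stat:.2f}, p = {p_value:.4f}")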
Pay for performance (PFP) is a funding arrangement in which a nonprofit is compensated based on achieving a specified outcome.
Pay for performance funding requires a nonprofit to track
information over time about participant outcomes.
Often this tracking will need to continue after the program has
concluded. For example, in a job training
program, participant data would likely need to be tracked for
months or years after program participation
to be able to report on outcomes such as employment or continued employment. Pay for performance arrangements have grown in popularity as a way to reward service providers for effecting change (compared to simply providing services). Currently, 13% of organizations
receive pay for performance funding, and an
additional 35% of organizations report having the ability to
meet pay for performance data requirements
(but are not currently receiving pay for performance funding).
[n=833]
Pay for Performance: What It Takes

Organizations that received pay for performance funding had more supports for evaluation and larger budgets.

13% of nonprofits receive PFP funding; an additional 35% have the ability to meet PFP data requirements but do not receive PFP funding (48% combined). 43% of nonprofits do not have the ability to collect data sufficient to participate in PFP funding.

Organizations that receive pay for performance (PFP) funding had more supports for evaluation (4.26 supports) than those that did not receive PFP funding (3.50 supports), p<.001. Organizations that receive PFP funding also had larger budgets (budgets of approximately $3.5-3.99 million, mean=8.34) than those that did not receive PFP funding (budgets of approximately $1.5-1.99 million, mean=4.14), p<.001.
Data storage systems & Software
Nonprofits store their evaluation data in many different ways,
both low- and high-tech. Overall, 92% of
nonprofit organizations that engaged in evaluation in 2016
stored evaluation data in some type of storage
system. [n=794]
In most organizations, data that can be used for evaluation will
come from multiple places. Top data storage
methods include spreadsheet software (92%), word processing
software (82%), and paper copies/hard copies
(82%). Less frequently used methods include survey software
(60%) and database solutions (58%). [n=727]
55% of nonprofit
organizations are using 4 or
more data storage methods.
Multiple methods likely
represent multiple data
collection methods and data
types, which could indicate
a well-rounded evaluation
approach. Having multiple
data storage methods makes
data analysis and tracking
more challenging.
The vast majority (85%) of
nonprofit organizations are using 3 or
more data storage methods.
92% of nonprofit organizations store their evaluation data.

Data storage methods used: spreadsheet software (92%), word processing software (82%), paper copies or hard copies (82%), survey software (60%), database (58%).

Number of data storage methods used: 1 method (3%), 2 methods (11%), 3 methods (30%), 4 methods (34%), 5 methods (21%).
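The 55% and 85% figures above follow directly from this distribution: 34% + 21% = 55% of organizations use four or more methods, and 30% + 34% + 21% = 85% use three or more.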
Custom database solutions are the frontrunner when it comes to
satisfying the needs of nonprofits. These
databases may better meet the data storage needs of nonprofits,
but also require a level of technical
proficiency that may be out of reach for some. On average,
nonprofits are satisfied almost half the time.
There is room for improvement in matching data storage
systems to the needs and resources of nonprofits.
Over half of nonprofit organizations (58%) that evaluated in
2016 used databases to store their evaluation data.
Databases were not the most common method, but were still
relatively commonplace, which is encouraging.
Often, there is a higher barrier to entry to working with
databases as they require set-up, some expertise, and
maintenance.
Databases

58% of organizations that store evaluation data use a custom database. Of these 395 organizations, 14% use Salesforce, 5% use CTK, and 4% use ETO.

Satisfaction with Data Storage Systems

Custom database solution: 55% satisfied, 16% dissatisfied
Survey software: 47% satisfied, 18% dissatisfied
Spreadsheet software: 45% satisfied, 19% dissatisfied
Word processing software: 45% satisfied, 18% dissatisfied
Paper copies or hard copies: 44% satisfied, 19% dissatisfied
AVERAGE: 47% satisfied, 18% dissatisfied
Evaluation in the social sector
Across the social sector, funders work hand in hand with
grantee partners to provide services, implement
initiatives, track progress, and make an impact. Evaluation in
the nonprofit sector has a symbiotic relationship
with evaluation in the philanthropic sector. Decisions made by
funders affect which nonprofits engage in
evaluation, how evaluation is staffed, how evaluation is funded,
which types of evaluation are conducted, and
other factors.
Evaluation is more likely to be a staffed responsibility in
funding institutions compared with nonprofits.
For both funding and nonprofit organizations, reporting to the
Board of Directors is the most prevalent use
of evaluation data (94% of nonprofits, 87% of funders). The use
of evaluation data to plan/revise programs
or initiatives and to plan/revise general strategies was also a
relatively high priority for both funders and
nonprofits. Where funders and nonprofits differ is using data
with each other: 93% of nonprofits use their data
to report to funders and 45% of funders use their data to report
to grantees.
Use of Evaluation Data

[Chart: percentage of nonprofits and funders that use evaluation data for each purpose. Reporting to the Board of Directors is the most prevalent use for both (94% of nonprofits, 87% of funders); 93% of nonprofits use their data to report to funders, and 45% of funders use their data to report to grantees. The chart also covers planning/revising programs or initiatives, planning/revising general strategies, advocacy or influence, and sharing findings with peers.]

75% of funders evaluate their work; 50% of funders have evaluation staff.
92% of nonprofits evaluate their work; 8% of nonprofits have evaluation staff.
Source: McCray (2014) Is Grantmaking Getting Smarter?
Source: Buteau & Coffman (2016) Benchmarking Foundation
Evaluation Practices. The Center for Effective Philanthropy and
the Center for Evaluation
Innovation surveyed 127 individuals who were the most senior
evaluation or program staff at their foundations. Foundations
included in the sample
were US-based independent foundations that provided $10
million or more in annual giving or were members of the Center
for Evaluation Innovation’s
Evaluation Roundtable. Only foundations that were known to
have staff with evaluation-related responsibilities were included
in the sample; therefore,
results are not generalizable.
Staff within funding and nonprofit organizations both face
barriers to evaluation—but the barriers are different.
Evaluation barriers within nonprofits are mostly due to
prioritization and resource constraints, while barriers
within funding institutions speak to challenges capturing the
interest and involvement of program staff.
The audience for an evaluation may play a large role in
decisions such as the type of evaluation that is
conducted, the focus of the data that is collected, or the amount
of resources that are put into the evaluation.
The top three audiences for funders are internal staff: executive
staff, other internal staff (e.g., program staff),
and the Board of Directors. Internal audiences are two of the
top three audiences for nonprofits, but for
nonprofits, the external funding audience is also a top priority.
Peer organizations are a markedly lower
priority for both nonprofits and funders, which suggests
evaluation data and findings are primarily used for
internal purposes and are not shared to benefit other like-
minded organizations.
Audience for Evaluation

[Chart: audiences for evaluation, ranked 1-5 for nonprofits and for funders. The ranked audiences are the Board of Directors, internal executive staff, other internal staff, funders (for nonprofits) or grantees (for funders), and peer organizations; peer organizations rank last for both.]

Evaluation Barriers

NONPROFIT CHALLENGES: limited staff time (79%); limited staff knowledge, tools, and/or resources (48%); insufficient financial resources (52%); having staff who did not believe in the importance of evaluation (25%).

FUNDER CHALLENGES: program staff's time (91%); program staff's attitude towards evaluation (50%); program staff's comfort in interpreting/using data (71%); program staff's lack of involvement in shaping the evaluations conducted (40%).

Source: Buteau & Coffman (2016) Benchmarking Foundation Evaluation Practices
METHODOLOGY

Purpose. The State of Evaluation project is the first nationwide project that systematically and repeatedly collects data from US nonprofits about their evaluation practices. Survey results are intended to build understanding: for nonprofits, to compare their evaluation practices to those of their peers; for donors and funders, to better understand how they can support evaluation practice throughout the sector; and for evaluators, to have more context about the existing evaluation practices and capacities of their nonprofit clients.
Sample. The State of Evaluation 2016 sampling frame is
501(c)3 organizations in the US that updated their IRS
Form 990 in 2013 or more recently and provided an email
address in the IRS Form 990. Qualified organizations
were identified through a GuideStar dataset. We sent an
invitation to participate in the survey to one unique
email address for each of the 37,440 organizations in the
dataset. With 1,125 responses, our response
rate is 3.0%. The adjusted response rate for the survey is 8.35%
(removing from the denominator bounced
invitations, unopened invitations, and individuals who had
previously opted out from SurveyMonkey). The
survey was available online from June 13, 2016, through August
6, 2016, and six reminder messages were
sent during that time. Organizations were asked to answer
questions based on their experiences in 2015.
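As a worked check on these figures: 1,125 responses out of 37,440 invitations gives the 3.0% raw rate (1,125 / 37,440 ≈ 0.030). The 8.35% adjusted rate implies an effective denominator of roughly 1,125 / 0.0835 ≈ 13,500 invitations remaining after removing bounces, unopened invitations, and prior opt-outs; the report does not state that adjusted denominator, so the 13,500 figure is only an inference from the two rates.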
Analysis. Once the survey closed, the data was cleaned and
many relationships between responses were
tested for significance in SPSS. In State of Evaluation 2016, we
note which findings are statistically significant
and the significance level. Not all of the findings contained
herein are statistically significant, but we believe
that they have practical value to funders, nonprofits, and
evaluators and so are included in these results.
Representation. We compared organizations that participated in
SOE 2016 with The Urban Institute’s
Nonprofit Sector in Brief 2015. Our random sample had less
representation from organizations with budgets
under $499,999 than the sector as a whole, and more responses
from organizations with budgets of $500,000
and larger. One third (33%) of our sample was from the South
(14,314 organizations), 26% from the West
(11,204), 18% from the Midwest (7,830), and 23% from the
Northeast (9,866).
Responses were mostly received from
executive staff (71% of 2016 survey
respondents), followed by fundraising
and development staff (11%),
administrative staff (4%), program
staff (4%), evaluation staff (3.5%),
Board Members (3%), financial staff
(1%), and other (3%). The top three
sources of funding for responding
organizations were individual donor
charitable contributions (27% of
organizations cited these as a funding
source), government grants and
contracts (26%), and foundation or
philanthropic charitable contributions
(20%).
Organization budget distribution, SOE 2016 sample vs. Urban Institute sector figures:

Less than $100,000: SOE 6.4%, Urban 29.5%
$100,000-$499,999: SOE 27.4%, Urban 37.0%
$500,000-$999,999: SOE 21.6%, Urban 10.6%
$1m-$4.9m: SOE 30.1%, Urban 14.4%
$5m or more: SOE 11.9%, Urban 8.6%

Source: McKeever (2015) The Nonprofit Sector in Brief 2015: Public Charities, Giving, and Volunteering
ABOUT INNOVATION NETWORK
Innovation Network is a nonprofit evaluation, research, and
consulting firm. We provide
knowledge and expertise to help nonprofits and funders learn
from their work to improve
their results.
ABOUT THE AUTHORS
Johanna Morariu, Director, Innovation Network
Johanna is the Co-Director of Innovation Network and provides
organizational leadership,
project leadership, and evaluation expertise to funder and
nonprofit partners. She has over a
decade of experience in advocacy evaluation, including
advocacy field evaluation, campaign
evaluation, policy advocacy evaluation, organizing and social
movements evaluation, and
leadership development evaluation.
Kat Athanasiades, Senior Associate, Innovation Network
Kat leads and supports many of Innovation Network’s
engagements across the evaluation
spectrum—from evaluation planning to using analysis to advise
organizations on strategy.
She brings skills of evaluation design and implementation,
advocacy evaluation, evaluation
facilitation and training, data collection tool design, qualitative
and quantitative data analysis,
and other technical assistance provision.
Veena Pankaj, Director, Innovation Network
Veena is the Co-Director of Innovation Network and has more
than 15 years of experience
navigating organizations through the evaluation design and
implementation process. She
has an array of experience in the areas of health promotion,
advocacy evaluation and health
equity. Veena leads many of the organization’s consulting
projects and works closely with
foundations and nonprofits to answer questions about program
design, implementation,
and impact.
Deborah Grodzicki, PhD, Senior Associate, Innovation Network
Deborah provides project leadership, backstopping, and staff
management in her role as
Senior Associate at Innovation Network. With nearly a decade
of experience in designing,
implementing, and managing evaluations, Deborah brings skills
of evaluation theory and
design, advocacy evaluation, quantitative and qualitative
methodology, and capacity
building.
October 2016
This work is licensed under the Creative Commons BY-
NC-SA License. It is attributed to Innovation Network, Inc.
202-728-0727
www.innonet.org
[email protected]
@InnoNet_Eval
Evaluation and Program Planning 58 (2016) 208–215
Original full length article
Barriers and facilitators to evaluation of health policies and
programs:
Policymaker and researcher perspectives
Carmen Huckel Schneider a,b,*, Andrew J. Milat c,d, Gabriel Moore a
a Menzies Centre for Health Policy, University of Sydney,
Level 6 The Hub, Charles Perkins Centre D17, The University
of Sydney NSW 2006, Australia
b The Sax Institute, Level 13, Building 10, 235 Jones Street,
Ultimo NSW 2007, Australia
c New South Wales Ministry of Health, 73 Miller St, North
Sydney NSW 2060, Australia
d Sydney Medical School, University of Sydney, Edward Ford
Building (A27), Fisher Road, University of Sydney NSW 2006,
Australia
A R T I C L E I N F O
Article history:
Received 9 December 2015
Received in revised form 28 April 2016
Accepted 17 June 2016
Available online 11 July 2016
Keywords:
Policy evaluation
Health policy
Public health administration
Public health
A B S T R A C T
Our research sought to identify the barriers and facilitators
experienced by policymakers and evaluation
researchers in the critical early stages of establishing an
evaluation of a policy or program. We sought to
determine the immediate barriers experienced at the point of
initiating or commissioning evaluations
and how these relate to broader system factors previously
identified in the literature.
We undertook 17 semi-structured interviews with a purposive
sample of senior policymakers (n = 9)
and senior evaluation researchers (n = 8) in Australia.
Six themes were consistently raised by participants: political
influence, funding, timeframes, a ‘culture
of evaluation’, caution over anticipated results, and skills of
policy agency staff. Participants also reflected
on the dynamics of policy-researcher relationships including
different motivations, physical and
conceptual separation of the policy and researcher worlds,
intellectual property concerns, and trust.
We found that political and system factors act as macro level
barriers to good evaluation practice that
are manifested as time and funding constraints and contribute to
organisational cultures that can come to
fear evaluation. These factors then fed into meso and micro
level factors. The dynamics of policy-
researcher relationship provide a further challenge to evaluating
government policies and programs.
© 2016 Elsevier Ltd. All rights reserved.
1. Introduction
Previous research suggests that the number of public sector
policies and programs that are evaluated in industrialized
nations
is low (Oxman, Bjorndal, & Becerra-Posada, 2010) and that
policy
makers, who we define as those working in the public sector to
plan, design, implement and oversee public programming, face
significant hurdles in establishing and implementing policy and
program evaluation (Evans, Snooks, & Howson, 2013). These
challenges sit within what is known as the ‘messy’ business of
public policy; including political compromise, collaboration and
coordination amongst multiple stakeholders and strategic and
tactical decision-making, that have long been recognised in the
evaluation policy literature (Head & Alford, 2015; Parsons,
2002;
House, 2004).
There is however notable potential to utilise evaluation to draw
lessons about public health policies’ and programs’
effectiveness,
applicability and efficiency, all of which can contribute to a
better
evidence-base for informed decision making. This is
particularly
the case when implementation of a policy or program is studied
as
experimentation in practice (Petticrew et al., 2005; Craig,
Cooper,
Gunnell, Gunnell, & Gunnell, 2012; Banks, 2009; Palfrey,
Thomas, &
Phillips, 2012; Patton, 1997). There is also a growing
expectation
that evaluation should be used by policymakers to inform public
health policies and programs (Oxman et al., 2010; Chalmers,
2003;
NSW Government, 2013; Patton, 1997) and in theory, use of
evaluation should have synergies with principles of new public
management style public service, such as those widely adopted
in
countries such as United Kingdom, Australia, Scandinavia and
North America (Dahler-Larsen, 2009; Johnston, 2000).
There is a wide range of literature on policy and program
evaluation, including considerable debate on appropriate
evalua-
tion methodologies, (DeGroff & Cargo, 2009; Learmonth, 2000;
Rychetnik, 2010; Smith & Petticrew, 2010; White, 2010) and
the
challenges of evaluation research uptake (Lomas, 2000; Dobbins
et al., 2009; Chelimsky, 1987; Weiss, 1970). Amongst this
literature
there is awareness that evaluation of public health policies and
programs is complex, highly context dependent, plagued by
ideological debate and subject to ongoing challenges of
research-
er—policy collaboration (Flitcroft, Gillespie, Salkeld, Carter, &
Trevena, 2011; Ross, Lavis, & Rodriguez, 2003; Haynes et al.,
2012;
Petticrew, 2013). There are however gaps in the literature about
barriers and facilitators to undertaking quality evaluation at the
level of public service where evaluations tend to be initiated
and
commissioned, namely in middle management and by program
directors. There are also few studies that deliberately seek to
gather, compare and analyse perceived barriers from both policy
makers and evaluation researchers operating within the same
policy space.
In this paper, we aim to explore the early stages of policy and
program evaluation, including the decision to evaluate,
preparing
for evaluation, commissioning evaluations and how
policymakers
and evaluation researchers work together for the purpose of
high
quality evaluation undertaken as part of evaluation research.
Our
primary aim is to:
• Identify barriers and facilitators to evaluation at the initiation
and commissioning stages, from within both the policy making
as well as evaluation research environments.
To achieve this overarching aim, we address the following:
• How evaluations of health policies and programs are planned,
commissioned and undertaken in Australia;
• The barriers and facilitators experienced by policymakers and
evaluation researchers in the early stages of establishing an
evaluation of a policy or program and in undertaking commis-
sioned evaluation work; and
• Initiatives that may assist government to overcome barriers to
effective evaluation.
The study was designed and undertaken at the Sax Institute, an
independent research and knowledge translation institute
located
in Sydney Australia, to inform the development of programs to
assist and improve evaluation practice in health policy.
Box 1. Interview schedule questions.

Questions for policy makers
1. Could you please describe how policies and programs are evaluated in your agency?
2. What factors influence decisions to undertake evaluations of policies or programs (in your experience)?
3. What factors act as barriers or facilitators to undertaking quality evaluations?
4. What is the best way to engage with evaluators?
5. Now, thinking about commissioning external evaluations, what would help you to commission an evaluation with an external evaluator?

Questions for evaluation researchers
1. From your experience could you please describe how government health policies and programs are evaluated?
2. What is the quality of such evaluations?
3. What are the barriers and facilitators to undertaking good evaluations of health or health-related government policies and programs?
4. What is the best way for policy makers to engage researchers/evaluators to evaluate government policies and programs?
5. What can policy makers do to effectively commission an evaluation?
2. Methods
We undertook 17 semi-structured interviews with a purposive
sample of policymakers (n = 9) and evaluation researchers (n =
8) in
Australia. Policymakers had a minimum of 10 years of
experience
in a health policy or program delivery role in either state or
federal
government. They were also currently employed in a health
related
government department and had previously been involved with
the evaluation of government policies and programs. Evaluation
researchers had a minimum of eight years of experience in
undertaking evaluations of health or health related policies or
programs run by state or federal level government agencies in
Australia and were senior academics at a university.
Separate but similar interview schedules were developed for
policymakers and evaluation researchers. The interviews com-
prised five open-ended questions (see Box 1). The study was
approved by the Humanities Human Research Ethics Committee,
University of Sydney, Australia. Informed written consent was
obtained from all participants.
Policymakers were asked to reflect on their experiences of how
public health policies and programs are evaluated, decisions
about
whether or not to evaluate them, ways in which policymakers
engage or work with evaluation researchers and what would
help
them to commission an evaluation with an external evaluator.
Evaluation researchers were asked about their experiences of
how government health policies and programs are evaluated in
Australia, working with government agencies and asked about
what would facilitate the process of undertaking evaluations of
public policies and programs.
Sixteen interviews were voice-recorded and transcribed and
one was documented with detailed notes. We used thematic
analysis to draw conclusions from the texts.
We used thematic analysis as a realist method—assuming that
by-and-large the responses of the interviewees to questions
posed
reflect the experiences, meaning and reality of the participants.
Coding and identification of themes broadly followed the
method
of Braun and Clarke (2006) and proceeded in four steps.
First, texts were de-identified and applied with markers to
categorise policymakers (PM1-9) or evaluation researchers
(ER1-
8). Second, author CHS inductively coded eight texts (four from
each category) using a free coding process. Codes were collated
and
merged within parent codes where possible. Authors CHS and
GM
then grouped parent codes into categories and compiled them in
a
proto-coding manual. Third, nine transcripts were distributed
amongst all authors, CHS, AM, GM to test-code using the
proto-
coding manual. Where necessary codes were supplemented
and/or
merged and re-categorised. The coding manual was finalised
comprising five main categories, each with several sub-
categories
of codes.
1) Planning and commissioning evaluations of policies and
programs
2) Relationships between policy makers and evaluation
research-
ers
3) Conducting evaluations of policies and programs
4) Utility of evaluation
5) Current evaluation practice
Fourth, author CHS re-coded all texts using the coding manual.
All authors then reviewed each code to interpret meaning and
emerging themes.
3. Results
Evaluations of government programs in Australia are generally
initiated and commissioned at the level of the program or policy
to be evaluated, though this may occur within higher level
evaluation frameworks. Program managers or directors of a
programmatic unit may take direct responsibility for setting
initial scope requirements of an evaluation and establishing a
committee structure to oversee competitive or selective tender-
ing to an external evaluator. In some government agencies there
may be specific programmatic units that perform internal
evaluations or facilitate the process of commissioning
evaluations
and establishing partnerships with research institutions.
Program
level managers and staff usually take on the role of facilitating
access to data, site visits and other necessary permissions for
evaluation work to be undertaken. Models of engagement
between evaluation researchers and policy agencies vary from
case to case. Respondents in our interviews described a range of
‘contracting out’ models, such as competitive and selective
tender, pre-qualification/panel schemes as well as ‘collaborative
engagement’ models such as co-production and researcher
initiated evaluation. A discussion of these types of models of
engagement is reported on elsewhere [Reference removed in
blinded manuscript].
Here, we discuss results of the interviews that shed light on our
main aim: identifying barriers and facilitators to evaluation
from
within both the policy making as well as evaluation research
environments. A broad array of themes emerged from the
interviews that related to experiences that facilitated or acted as
a barrier to evaluation. Of these, six were consistently raised as
particularly important by a high number of study participants:
timeframes (15 participants), political influence (12
participants),
funding (10 participants), having a ‘culture of evaluation’
(9 participants), caution over anticipated evaluation outcomes
(8 participants) and skills and ability of policy agency staff
(8 participants).
3.1. Timeframes
Evaluation timeframes were raised by more study participants than any other issue. They acted as a barrier or facilitator in
three ways:
First, evaluations are often initiated late in the policy or
program implementation process. This can have several conse-
quences: funding to undertake evaluation can simply run out; or
the situation arises that no adequate data has been collected at
an
early stage of implementation that can act as a baseline. There
may
also simply be inadequate time to undertake data collection or
conduct analysis for rigorous evaluation. Many interviewees
found
that the consequence of starting late was to undertake small
evaluations limited in scope, often evaluating processes rather
than outcome. One interviewee reported on limiting the evalua-
tion of a complex intervention on childhood well-being to a
review
of relationships between stakeholders in the process.
“One of the things that makes it difficult is when something has
been implemented and then ‘oh, let’s do an evaluation’. So you
don’t have baseline data, it hasn’t been set up to exclude all
sorts of
other factors that may have brought about change” (ER7)
Second, timeframes are often too short to conduct an
evaluation. The vast majority of public health policies and
programs will require years to have a measurable impact on
health outcomes. For example, one interviewee had been
involved
in evaluation reporting for obesity prevention programs that had
to
focus on logic of assumptions to conduct the evaluation.
“But timeframe is another problem I think. Often, the process as
a
new program gets rolled out and it’s expected to have massive
impact somehow within a 12 month period and it’s just not a
realistic timeframe to be able to demonstrate change” (ER3)
The time required to design a complex evaluation and
undertake complex analysis is also beyond what is feasible in
many of the short timeframes of evaluations.
“I mean if you want to do a proper evaluation you can’t have a
turnaround time of one month or something like six weeks. You
know if you’re going to do proper focus groups or you know,
design
questionnaires . . . and I notice a lot of times they want this
quick
turnaround time” (PM1)
Furthermore, timeframes further limit who might be engaged
to undertake evaluations. Academic researchers are less likely
to be
able to dedicate time to an evaluation that needs to be done
quickly, especially if requiring the recruitment of staff.
“But one of the barriers for academic groups being part of it is
the
timeframes. So you know you’ll normally sort of have to put in
your
tender bid very quickly and then you normally have to start
work
on it instantaneously, and academic groups are not always able
to
be quite so nimble as that” (ER5)
Third, policymakers reported general lack of time between their
everyday tasks to dedicate to careful consideration of evaluation
planning, including program objectives and evaluation
questions.
“But because of the kind of immediacy of the job, and even if
the
intellectual capacity is there to really engage with the process
there
often isn’t the time. You know the writing of briefs, and
advising
people, and doing this and that. I know it is about getting the
right
balance . . . but it limits the time available to sit down and
really
nut out your questions” (PM4)
3.2. Political influence
Many participants had experienced political imperatives as a
major barrier to conducting evaluations; often citing it as a
more
significant issue than funding, timing or the knowledge and
skills
of agency staff.
“I think we tend to complain that there aren’t resources and that
there isn’t enough people around with the ability to do it. To me
those are important but they’re much less significant than the
political cloud that surrounds the whole business” (ER1).
Political imperatives included those situations where political
preferences influenced whether an evaluation goes ahead;
pressure from higher up to have a policy or program succeed or
not; and any decisions based on the sensitivity of a policy or
program. Certain issue areas were more likely to be subject to
ideological debate and therefore political pressure than others;
as
was the case for certain approaches. In public health for
example,
debate between community vs individual responsibility as a
basic
paradigm that underlies policy decisions can make one issue
area
or policy response more politically sensitive than another. In
some
cases public health programs and policies can proceed at arms-
length to such debates, in other cases they can be heavily
influenced or weighed down by political imperatives and
uncertainty (Head & Alford, 2015).
Political influence was seen to be stronger at more senior levels
of government agencies. Election cycles reduced timeframes for
action and subsequently evaluation and reporting. The time
required for a policy or program to actually show an effect may
thus lie outside of the available timeframe required.
“Policymakers obviously work with a lot of constraints and
political constraints. Even when it is personally recognised that
they really need is five years to see if something is making a
difference they are working in an environment that doesn’t
allow
for that” (ER3)
Political ideology and experiences of highest level public
servants and elected officials were seen to trickle down into
government agencies determining what programs are prioritised
or placed under scrutiny.
“I wouldn’t do this—but it’s possible that if something’s
coming
under attack and . . . or is contentious and either the government
of the minister or the department thinks that it’s a valuable
program that they’ll construct an evaluation or research around
it
in such a way that it will deliver the answers they’re looking
for”
(PM7)
Political imperatives not only influenced if, when and with
what funding a policy or program might be evaluated, but
indirectly determined the focus of evaluations. For example,
there
may be a preference for measuring cost-utility rather than
population health outcomes. In some situations, it may be more
important to be immediately seen to be ‘doing something’ rather
than having an actual effect.
“If it is a politically motivated process, then the aim is to be
recognised for doing the program or having the policy. The
actual
‘Was it really effective?’ . . . I don’t think those questions are
always required or wanted” (ER2)
Both evaluation researchers and policymakers recognised the
frustration that can result when evaluation results and recom-
mendations formulated from them are not acted upon.
3.3. Funding
Inadequate funding for evaluation was generally seen to
compromise the quality of evaluations. It was also linked to
evaluations that were initiated late in the implementation phase
and sometimes after funding had run out.
“So barriers—always money . . . We used to occasionally find
out
some of the more complex analyses, and if you don’t have
money to
do that then it obviously stymies your ability to do some of the
more complex stuff” (PM6)
Interviewees did not however experience a lack of funding for
policies and programs per se, but rather that too little funding
was
allocated specifically for evaluation in the policy and program
design phase.
“They had money for the implementation of it but no money had
been set aside for the evaluation for example. It was how the
money
was packaged” (ER5)
Having evaluation written in to the program design from the
beginning was seen to increase the likelihood that adequate
funding for the evaluation would be available.
“If it’s written in that the program’s going to have an
evaluation,
then that would help, and if there’s resources identified to do it,
both money and people—and skills so the whole set” (PM9)
Lack of funding also influenced whether an evaluation would be
undertaken with external evaluators or undertaken in-house. In
situations where agency resources are scarce and there were
fewer
opportunities for staff to develop skills, the quality of
evaluations
was seen to be compromised, or at least limited in
methodological
scope.
3.4. Culture of evaluation
A culture of evaluation within a government agency refers to a
value being placed on evaluation in all decision making
processes
such as priority setting, program design and implementation,
and
funding (Dobbins et al., 2009). Both policymakers and
evaluation
researchers connected the extent to which such values were
present within a government agency as a key facilitator to good
evaluation practice. However, it was an area commonly
identified
for potential improvement.
“ . . . so I really think it just depends on whether people are
focused
on doing them or not” (ER1)
A culture of evaluation was seen to require a high degree of
motivation to evaluate policies and programs, and opportunities
to
share experiences, positive and negative, with colleagues. It was
also linked with having high level championing of evaluation
and a
staff mix with skills, experience and motivation to integrate
evaluation into standard practice.
“I mean it’s probably something to do with people that are in
decision-making roles I expect – how much of an emphasis they
place on evaluation and then how much they know about it”
(PM9)
A culture of evaluation was seen to have the potential to
counteract some of the nervousness surrounding evaluation. A
poor evaluation culture was characterised by a lack of support
for
government agency staff that were confused about how to
conduct
an evaluation.
3.5. Caution over anticipated outcomes
Several policymakers and evaluation researchers recalled
experiences where fear or worry of possible negative evaluation
results (i.e., results showing that a program failed to have an
effect,
or had made a situation worse) acted as a barrier to evaluation.
This
fear appeared to act as a more significant barrier for large scale
programs or policies.
“not actually wanting to deal with the unpalatable truth then
obviously that’s a major barrier” (ER2)
In many cases, evaluations are undertaken specifically to
demonstrate positive results and participants reported that briefs
for external evaluators were written in a way that implies a
certain
result is expected.
“Of course there are some agendas behind evaluations, where
the
aim is to snow success rather than learn” (PM2)
This can influence the eventual mode of engagement with
external researchers; private consulting firms were often seen as
more likely to report positively on policies and programs, in a
way
that is less of a risk in terms of consequences of possible
negative
evaluation outcomes. It also led to situations where there was an
understanding that results of evaluation would not be published;
and in such cases there was a concern that quality of evaluations
was generally lower due to the absence of expectations of
public
scrutiny.
“Within government, and also most organisations, people would
like to see very positive stories coming out, and tend to be quite
wary of independent groups taking a look at what they are
doing.
Because you’ll never know what they’ll say” (ER7)
Interviewees postulated that the underlying cause of this
degree of caution was a need to demonstrate positive outcomes
For this Assignment, read the case study for Claudia and find two to.docxFor this Assignment, read the case study for Claudia and find two to.docx
For this Assignment, read the case study for Claudia and find two to.docx
 
For this assignment, please start by doing research regarding the se.docx
For this assignment, please start by doing research regarding the se.docxFor this assignment, please start by doing research regarding the se.docx
For this assignment, please start by doing research regarding the se.docx
 
For this assignment, please discuss the following questionsWh.docx
For this assignment, please discuss the following questionsWh.docxFor this assignment, please discuss the following questionsWh.docx
For this assignment, please discuss the following questionsWh.docx
 
For this assignment, locate a news article about an organization.docx
For this assignment, locate a news article about an organization.docxFor this assignment, locate a news article about an organization.docx
For this assignment, locate a news article about an organization.docx
 
For this assignment, it requires you Identifies the historic conte.docx
For this assignment, it requires you Identifies the historic conte.docxFor this assignment, it requires you Identifies the historic conte.docx
For this assignment, it requires you Identifies the historic conte.docx
 
For this assignment, create a framework from which an international .docx
For this assignment, create a framework from which an international .docxFor this assignment, create a framework from which an international .docx
For this assignment, create a framework from which an international .docx
 
For this assignment, create a 15-20 slide digital presentation in tw.docx
For this assignment, create a 15-20 slide digital presentation in tw.docxFor this assignment, create a 15-20 slide digital presentation in tw.docx
For this assignment, create a 15-20 slide digital presentation in tw.docx
 
For this assignment, you are to complete aclinical case - narrat.docx
For this assignment, you are to complete aclinical case - narrat.docxFor this assignment, you are to complete aclinical case - narrat.docx
For this assignment, you are to complete aclinical case - narrat.docx
 
For this assignment, you are to complete aclinical case - narr.docx
For this assignment, you are to complete aclinical case - narr.docxFor this assignment, you are to complete aclinical case - narr.docx
For this assignment, you are to complete aclinical case - narr.docx
 
For this assignment, you are provided with four video case studies (.docx
For this assignment, you are provided with four video case studies (.docxFor this assignment, you are provided with four video case studies (.docx
For this assignment, you are provided with four video case studies (.docx
 
For this assignment, you are going to tell a story, but not just.docx
For this assignment, you are going to tell a story, but not just.docxFor this assignment, you are going to tell a story, but not just.docx
For this assignment, you are going to tell a story, but not just.docx
 
For this assignment, you are asked to prepare a Reflection Paper. Af.docx
For this assignment, you are asked to prepare a Reflection Paper. Af.docxFor this assignment, you are asked to prepare a Reflection Paper. Af.docx
For this assignment, you are asked to prepare a Reflection Paper. Af.docx
 
For this assignment, you are asked to prepare a Reflection Paper. .docx
For this assignment, you are asked to prepare a Reflection Paper. .docxFor this assignment, you are asked to prepare a Reflection Paper. .docx
For this assignment, you are asked to prepare a Reflection Paper. .docx
 
For this assignment, you are asked to conduct some Internet research.docx
For this assignment, you are asked to conduct some Internet research.docxFor this assignment, you are asked to conduct some Internet research.docx
For this assignment, you are asked to conduct some Internet research.docx
 
For this assignment, you are a professor teaching a graduate-level p.docx
For this assignment, you are a professor teaching a graduate-level p.docxFor this assignment, you are a professor teaching a graduate-level p.docx
For this assignment, you are a professor teaching a graduate-level p.docx
 
For this assignment, we will be visiting the PBS website,Race  .docx
For this assignment, we will be visiting the PBS website,Race  .docxFor this assignment, we will be visiting the PBS website,Race  .docx
For this assignment, we will be visiting the PBS website,Race  .docx
 
For this assignment, the student starts the project by identifying a.docx
For this assignment, the student starts the project by identifying a.docxFor this assignment, the student starts the project by identifying a.docx
For this assignment, the student starts the project by identifying a.docx
 

Recently uploaded

Chapter -12, Antibiotics (One Page Notes).pdf
Chapter -12, Antibiotics (One Page Notes).pdfChapter -12, Antibiotics (One Page Notes).pdf
Chapter -12, Antibiotics (One Page Notes).pdf
Kartik Tiwari
 
BÀI TẬP BỔ TRỢ TIẾNG ANH GLOBAL SUCCESS LỚP 3 - CẢ NĂM (CÓ FILE NGHE VÀ ĐÁP Á...
BÀI TẬP BỔ TRỢ TIẾNG ANH GLOBAL SUCCESS LỚP 3 - CẢ NĂM (CÓ FILE NGHE VÀ ĐÁP Á...BÀI TẬP BỔ TRỢ TIẾNG ANH GLOBAL SUCCESS LỚP 3 - CẢ NĂM (CÓ FILE NGHE VÀ ĐÁP Á...
BÀI TẬP BỔ TRỢ TIẾNG ANH GLOBAL SUCCESS LỚP 3 - CẢ NĂM (CÓ FILE NGHE VÀ ĐÁP Á...
Nguyen Thanh Tu Collection
 
Multithreading_in_C++ - std::thread, race condition
Multithreading_in_C++ - std::thread, race conditionMultithreading_in_C++ - std::thread, race condition
Multithreading_in_C++ - std::thread, race condition
Mohammed Sikander
 
Marketing internship report file for MBA
Marketing internship report file for MBAMarketing internship report file for MBA
Marketing internship report file for MBA
gb193092
 
The Challenger.pdf DNHS Official Publication
The Challenger.pdf DNHS Official PublicationThe Challenger.pdf DNHS Official Publication
The Challenger.pdf DNHS Official Publication
Delapenabediema
 
Welcome to TechSoup New Member Orientation and Q&A (May 2024).pdf
Welcome to TechSoup   New Member Orientation and Q&A (May 2024).pdfWelcome to TechSoup   New Member Orientation and Q&A (May 2024).pdf
Welcome to TechSoup New Member Orientation and Q&A (May 2024).pdf
TechSoup
 
Unit 8 - Information and Communication Technology (Paper I).pdf
Unit 8 - Information and Communication Technology (Paper I).pdfUnit 8 - Information and Communication Technology (Paper I).pdf
Unit 8 - Information and Communication Technology (Paper I).pdf
Thiyagu K
 
Operation Blue Star - Saka Neela Tara
Operation Blue Star   -  Saka Neela TaraOperation Blue Star   -  Saka Neela Tara
Operation Blue Star - Saka Neela Tara
Balvir Singh
 
The approach at University of Liverpool.pptx
The approach at University of Liverpool.pptxThe approach at University of Liverpool.pptx
The approach at University of Liverpool.pptx
Jisc
 
Azure Interview Questions and Answers PDF By ScholarHat
Azure Interview Questions and Answers PDF By ScholarHatAzure Interview Questions and Answers PDF By ScholarHat
Azure Interview Questions and Answers PDF By ScholarHat
Scholarhat
 
Biological Screening of Herbal Drugs in detailed.
Biological Screening of Herbal Drugs in detailed.Biological Screening of Herbal Drugs in detailed.
Biological Screening of Herbal Drugs in detailed.
Ashokrao Mane college of Pharmacy Peth-Vadgaon
 
TESDA TM1 REVIEWER FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...
TESDA TM1 REVIEWER  FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...TESDA TM1 REVIEWER  FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...
TESDA TM1 REVIEWER FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...
EugeneSaldivar
 
2024.06.01 Introducing a competency framework for languag learning materials ...
2024.06.01 Introducing a competency framework for languag learning materials ...2024.06.01 Introducing a competency framework for languag learning materials ...
2024.06.01 Introducing a competency framework for languag learning materials ...
Sandy Millin
 
June 3, 2024 Anti-Semitism Letter Sent to MIT President Kornbluth and MIT Cor...
June 3, 2024 Anti-Semitism Letter Sent to MIT President Kornbluth and MIT Cor...June 3, 2024 Anti-Semitism Letter Sent to MIT President Kornbluth and MIT Cor...
June 3, 2024 Anti-Semitism Letter Sent to MIT President Kornbluth and MIT Cor...
Levi Shapiro
 
A Survey of Techniques for Maximizing LLM Performance.pptx
A Survey of Techniques for Maximizing LLM Performance.pptxA Survey of Techniques for Maximizing LLM Performance.pptx
A Survey of Techniques for Maximizing LLM Performance.pptx
thanhdowork
 
STRAND 3 HYGIENIC PRACTICES.pptx GRADE 7 CBC
STRAND 3 HYGIENIC PRACTICES.pptx GRADE 7 CBCSTRAND 3 HYGIENIC PRACTICES.pptx GRADE 7 CBC
STRAND 3 HYGIENIC PRACTICES.pptx GRADE 7 CBC
kimdan468
 
Best Digital Marketing Institute In NOIDA
Best Digital Marketing Institute In NOIDABest Digital Marketing Institute In NOIDA
Best Digital Marketing Institute In NOIDA
deeptiverma2406
 
Guidance_and_Counselling.pdf B.Ed. 4th Semester
Guidance_and_Counselling.pdf B.Ed. 4th SemesterGuidance_and_Counselling.pdf B.Ed. 4th Semester
Guidance_and_Counselling.pdf B.Ed. 4th Semester
Atul Kumar Singh
 
The French Revolution Class 9 Study Material pdf free download
The French Revolution Class 9 Study Material pdf free downloadThe French Revolution Class 9 Study Material pdf free download
The French Revolution Class 9 Study Material pdf free download
Vivekanand Anglo Vedic Academy
 
Overview on Edible Vaccine: Pros & Cons with Mechanism
Overview on Edible Vaccine: Pros & Cons with MechanismOverview on Edible Vaccine: Pros & Cons with Mechanism
Overview on Edible Vaccine: Pros & Cons with Mechanism
DeeptiGupta154
 

Recently uploaded (20)

Chapter -12, Antibiotics (One Page Notes).pdf
Chapter -12, Antibiotics (One Page Notes).pdfChapter -12, Antibiotics (One Page Notes).pdf
Chapter -12, Antibiotics (One Page Notes).pdf
 
BÀI TẬP BỔ TRỢ TIẾNG ANH GLOBAL SUCCESS LỚP 3 - CẢ NĂM (CÓ FILE NGHE VÀ ĐÁP Á...
BÀI TẬP BỔ TRỢ TIẾNG ANH GLOBAL SUCCESS LỚP 3 - CẢ NĂM (CÓ FILE NGHE VÀ ĐÁP Á...BÀI TẬP BỔ TRỢ TIẾNG ANH GLOBAL SUCCESS LỚP 3 - CẢ NĂM (CÓ FILE NGHE VÀ ĐÁP Á...
BÀI TẬP BỔ TRỢ TIẾNG ANH GLOBAL SUCCESS LỚP 3 - CẢ NĂM (CÓ FILE NGHE VÀ ĐÁP Á...
 
Multithreading_in_C++ - std::thread, race condition
Multithreading_in_C++ - std::thread, race conditionMultithreading_in_C++ - std::thread, race condition
Multithreading_in_C++ - std::thread, race condition
 
Marketing internship report file for MBA
Marketing internship report file for MBAMarketing internship report file for MBA
Marketing internship report file for MBA
 
The Challenger.pdf DNHS Official Publication
The Challenger.pdf DNHS Official PublicationThe Challenger.pdf DNHS Official Publication
The Challenger.pdf DNHS Official Publication
 
Welcome to TechSoup New Member Orientation and Q&A (May 2024).pdf
Welcome to TechSoup   New Member Orientation and Q&A (May 2024).pdfWelcome to TechSoup   New Member Orientation and Q&A (May 2024).pdf
Welcome to TechSoup New Member Orientation and Q&A (May 2024).pdf
 
Unit 8 - Information and Communication Technology (Paper I).pdf
Unit 8 - Information and Communication Technology (Paper I).pdfUnit 8 - Information and Communication Technology (Paper I).pdf
Unit 8 - Information and Communication Technology (Paper I).pdf
 
Operation Blue Star - Saka Neela Tara
Operation Blue Star   -  Saka Neela TaraOperation Blue Star   -  Saka Neela Tara
Operation Blue Star - Saka Neela Tara
 
The approach at University of Liverpool.pptx
The approach at University of Liverpool.pptxThe approach at University of Liverpool.pptx
The approach at University of Liverpool.pptx
 
Azure Interview Questions and Answers PDF By ScholarHat
Azure Interview Questions and Answers PDF By ScholarHatAzure Interview Questions and Answers PDF By ScholarHat
Azure Interview Questions and Answers PDF By ScholarHat
 
Biological Screening of Herbal Drugs in detailed.
Biological Screening of Herbal Drugs in detailed.Biological Screening of Herbal Drugs in detailed.
Biological Screening of Herbal Drugs in detailed.
 
TESDA TM1 REVIEWER FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...
TESDA TM1 REVIEWER  FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...TESDA TM1 REVIEWER  FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...
TESDA TM1 REVIEWER FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...
 
2024.06.01 Introducing a competency framework for languag learning materials ...
2024.06.01 Introducing a competency framework for languag learning materials ...2024.06.01 Introducing a competency framework for languag learning materials ...
2024.06.01 Introducing a competency framework for languag learning materials ...
 
June 3, 2024 Anti-Semitism Letter Sent to MIT President Kornbluth and MIT Cor...
June 3, 2024 Anti-Semitism Letter Sent to MIT President Kornbluth and MIT Cor...June 3, 2024 Anti-Semitism Letter Sent to MIT President Kornbluth and MIT Cor...
June 3, 2024 Anti-Semitism Letter Sent to MIT President Kornbluth and MIT Cor...
 
A Survey of Techniques for Maximizing LLM Performance.pptx
A Survey of Techniques for Maximizing LLM Performance.pptxA Survey of Techniques for Maximizing LLM Performance.pptx
A Survey of Techniques for Maximizing LLM Performance.pptx
 
STRAND 3 HYGIENIC PRACTICES.pptx GRADE 7 CBC
STRAND 3 HYGIENIC PRACTICES.pptx GRADE 7 CBCSTRAND 3 HYGIENIC PRACTICES.pptx GRADE 7 CBC
STRAND 3 HYGIENIC PRACTICES.pptx GRADE 7 CBC
 
Best Digital Marketing Institute In NOIDA
Best Digital Marketing Institute In NOIDABest Digital Marketing Institute In NOIDA
Best Digital Marketing Institute In NOIDA
 
Guidance_and_Counselling.pdf B.Ed. 4th Semester
Guidance_and_Counselling.pdf B.Ed. 4th SemesterGuidance_and_Counselling.pdf B.Ed. 4th Semester
Guidance_and_Counselling.pdf B.Ed. 4th Semester
 
The French Revolution Class 9 Study Material pdf free download
The French Revolution Class 9 Study Material pdf free downloadThe French Revolution Class 9 Study Material pdf free download
The French Revolution Class 9 Study Material pdf free download
 
Overview on Edible Vaccine: Pros & Cons with Mechanism
Overview on Edible Vaccine: Pros & Cons with MechanismOverview on Edible Vaccine: Pros & Cons with Mechanism
Overview on Edible Vaccine: Pros & Cons with Mechanism
 

CEP • CEI 1BENCHMARKING FoundationEvaluationPra.docx

comprehensive review ever undertaken of evaluation practices at foundations. It is our hope that the data presented in this report will help you and your foundation determine what evaluation systems and practices align best with your foundation's strategy, culture, and ultimate mission. What resources should you invest in evaluation? On what should your evaluation efforts focus? How can you learn from and use evaluation information? We believe that considering these questions in light of this benchmarking data will help you answer them more thoughtfully. Ultimately, we hope the information in this report helps you prepare your foundation to better assess its progress toward its goals and its overall performance.

We hope you find this data useful.

Sincerely,
Ellie & Julia
September 2016

Ellie Buteau, Ph.D.
Vice President – Research
Center for Effective Philanthropy

Julia Coffman
Director
Center for Evaluation Innovation

DEFINITION OF EVALUATION USED IN THIS STUDY
In the survey, we defined evaluation and/or evaluation-related activities as activities undertaken to systematically assess and learn about the foundation's work, above and beyond final grant or finance reporting, monitoring, and standard due diligence practices.

Respondent Demographics

ROLE AND TENURE
Thirty-eight percent of respondents have had responsibility for evaluation-related activities at the foundation for two years or less:
<1 year: 13%
1-2 years: 25%
3-5 years: 30%
6-8 years: 14%
≥9 years: 18%

Of respondents:
62% report to the CEO/President
23% report to senior or executive level program staff
58% are evaluation staff
35% are program staff
PREVIOUS EVALUATION TRAINING
Of respondents:
45% have an advanced degree in the social sciences or applied research
37% have received training in evaluation through workshops or short courses
5% have an advanced degree in evaluation

Foundation Demographics

ORGANIZATIONAL STRUCTURE
21% of foundations have a dedicated evaluation unit or department, separate from the program department. Of these departments:
34% have had their name changed in the past two years
21% were newly created during the past two years
19% have their own grantmaking and/or contracting budget

79% of foundations do not have a dedicated evaluation unit or department. Respondents at these foundations most often work in program departments; others work in operations or administration departments or in the President's office or executive office.

Larger foundations are more likely to have a dedicated evaluation unit or department.1 Common department names include: Evaluation; Evaluation and Learning; Research and Evaluation; Research, Evaluation, and Learning; and Learning and Impact.

1 A chi-square analysis was conducted between whether or not foundations have asset sizes greater than the median in our sample and whether or not foundations have a dedicated evaluation department. A statistical difference of a medium effect size was found. A chi-square analysis was also conducted between whether or not foundations give more than the median annual giving amount in our sample and whether or not those foundations have a dedicated evaluation department. Again, a statistical difference of a medium effect size was found.
EVALUATION STAFFING
About half of foundations have 1.5 full-time equivalent (FTE) staff or more regularly dedicated to evaluation work. Larger foundations tend to have more staff regularly dedicated to evaluation work.2 For every 10 program staff members, the median foundation has about one FTE staff member regularly dedicated to evaluation work.

2 An independent samples t-test indicated that foundations with asset sizes greater than the median foundation in our sample were more likely to have a greater number of evaluation staff. This statistical difference was of a medium effect size.

EVALUATION SPENDING
About half of foundations spend $200,000 or more on evaluation (in U.S. dollars).3 About one-quarter of foundations spend $40,000 or less on evaluation, and about one-quarter spend $1 million or more. Thirty-five percent of respondents are quite or extremely confident in the dollar estimate they provided.

3 In the survey, we did not put parameters around what respondents should or should not include in the dollar value they provided. Respondents were told it is understandable that it may be difficult to give a precise number, but to provide their best estimate.

Of respondents:
45% perceive that funding levels for evaluation work at their foundation have stayed about the same relative to the size of the program budget over the past two years
50% perceive that funding levels for evaluation work at their foundation increased relative to the size of the program budget over the past two years

MODELS OF HOW EVALUATION RESPONSIBILITIES ARE MANAGED
Staff with evaluation-related responsibilities:
direct and manage all or most work related to evaluation at 45% of foundations
provide advice and coaching to other foundation staff who manage all or most work related to evaluation at 21% of foundations
hire third parties to direct and manage all or most work related to evaluation on behalf of the foundation at 14% of foundations
Evaluation Practices

GRANTMAKING
Half of respondents (50%) report that most or all of their grantmaking is proactive (e.g., the foundation identifies and requests proposals from organizations or programs that target specific issues or are a good fit with foundation initiatives and strategies). About one-quarter (27%) report that most or all of their grantmaking is responsive (e.g., driven by unsolicited requests from grant seekers).

PRIORITIZATION OF EVALUATION ACTIVITIES
Percentage of respondents spending time on each activity, and percentage of respondents spending time on the activity who say it is a top priority:
Provide research or data to inform grantmaking strategy: 90% spend time; 35% say it is a top priority
Evaluate foundation initiatives or strategies: 88%; 51%
Refine grantmaking strategy during implementation: 87%; 26%
Develop grantmaking strategy: 86%; 34%
Design and/or facilitate learning processes or events within the foundation: 79%; 27%
Compile and/or monitor metrics to measure foundation performance: 71%; 33%
Evaluate individual grants: 71%; 34%
Design and/or facilitate learning processes or events with grantees or other external stakeholders: 70%; 15%
Improve grantee capacity for data collection or evaluation: 69%; 14%
Conduct/commission satisfaction/perception surveys (of grantees or other stakeholders): 60%; 7%
Disseminate evaluation findings externally: 57%; 9%

Half of respondents report spending time on at least nine of these activities.

CHALLENGES
Percentage of respondents who say the following practices have been at least somewhat challenging in their foundation's evaluation efforts:4
Having evaluations result in useful lessons for the field: 83%
Having evaluations result in useful lessons for grantees: 82%
Having evaluations result in meaningful insights for the foundation: 76%
Incorporating evaluation results into the way the foundation will approach its work in the future: 70%
Allocating sufficient monetary resources for evaluation efforts: 63%
Identifying third party evaluators that produce high quality work: 59%
Having foundation staff and grantees agree on the goals of the evaluation: 36%
Having programmatic staff and third party evaluators agree on the goals of the evaluation: 31%

4 Respondents were asked to rate how challenging each of the practices has been to their foundation's evaluation efforts on a 1-5 scale, where 1 = "Not at all challenging," 2 = "Not very challenging," 3 = "Somewhat challenging," 4 = "Quite challenging," and 5 = "Extremely challenging." The percentages above represent respondents who rated a 3, 4, or 5 on an item.

INVESTMENT IN EVALUATION ACTIVITIES
Percentage of respondents who say their foundation invests too little in the following evaluation activities:
Disseminating evaluation findings externally: 71%
Improving grantee capacity for data collection or evaluation: 69%
Designing and/or facilitating learning processes or events with grantees or other external stakeholders: 58%
Compiling and/or monitoring metrics to measure foundation performance: 55%
Designing and/or facilitating learning processes or events within the foundation: 48%
Evaluating foundation initiatives or strategies: 44%
Providing research or data to inform grantmaking strategy: 42%
Conducting/commissioning satisfaction/perception surveys (of grantees or other stakeholders): 41%
Refining grantmaking strategy during implementation: 39%
Developing grantmaking strategy: 26%
Evaluating individual grants: 22%
MOST COMMON APPROACHES TO FUNDING GRANTEES' EVALUATION EFFORTS
41% of foundations have no common approach to evaluating grants because funding evaluation efforts differs widely across the foundation's program or strategy areas
19% of respondents report that grantees can spend a portion of their grant dollars on evaluation if they request to do so
10% of respondents report that grantees receive general operating support dollars, and they can choose to use these dollars for evaluation5
12% of respondents say the foundation commissions outside evaluators to evaluate individual grantees' work

5 All other response options for this item were selected by fewer than 10 percent of respondents and are not shown here.

PERCENTAGE OF INDIVIDUAL GRANTS FUNDED FOR EVALUATION
Almost two-thirds (63%) of respondents say their foundations fund evaluations for less than 10 percent of individual grants. (Response categories ranged from none and less than 10% up to more than 75% of grants.)
RANDOMIZED CONTROL TRIALS
About one-fifth (19%) of respondents say their foundations have provided funding for a randomized control trial of their grantees' work in the past three years. Of those who have provided funding for a randomized control trial:
63% found it quite or extremely useful in future grantmaking decisions
38% found it quite or extremely useful in understanding the impact the foundation's grant dollars are making
42% found it quite or extremely useful in refining foundation strategies or initiatives
13% found it quite or extremely useful in providing evidence for the field about what does and does not work
TYPES OF EVALUATION
Frequency with which different types of evaluations are conducted on grantees' work and foundation initiatives or strategies:

Grantees' work: Summative — regularly 20%, occasionally 52%, never 28%; Formative — regularly 15%, occasionally 53%, never 31%; Developmental — regularly 10%, occasionally 46%, never 44%.

Foundation initiatives or strategies: Summative — regularly 25%, occasionally 53%, never 22%; Formative — regularly 20%, occasionally 57%, never 23%; Developmental — regularly 22%, occasionally 36%, never 42%.

COLLABORATION
Over 40 percent of respondents say their foundation has engaged in efforts to coordinate its evaluation work with other funders working in the same issue areas:
Yes, we are already engaged in such efforts: 42%
No, but we are currently considering such efforts: 28%
No, we considered it but concluded it was not right for us: 6%
No, we have not considered engaging in any such efforts: 24%

Using Evaluation Information

UNDERSTANDING
Percentage of respondents who believe their foundation understands quite or very accurately what it has accomplished through its work, when it comes to each of the following:
Grantee organizations it seeks to affect: 46%
Fields it seeks to affect: 35%
Communities it seeks to affect: 22%
Ultimate beneficiaries it seeks to affect: 20%

CHALLENGES
Percentage of respondents who say each of the following is a challenge for program staff's use of information collected through, or resulting from, evaluation work:
Program staff's time: 91%
Program staff's level of comfort in interpreting/using data: 71%
Program staff's attitudes toward evaluation: 50%
Program staff's lack of involvement in shaping the evaluations conducted: 40%
USE OF INFORMATION
Percentage of respondents who say program staff are likely to use information collected through, or resulting from, evaluations to inform the following aspects of their work:
Decide whether to adjust grantmaking strategies during implementation: 74%
Decide whether to renew grantees' funding: 71%
Decide whether to expand into new program areas or exit program areas: 76%
Hold grantees accountable to the goals of their grants: 57%
Understand what the foundation has accomplished through its work: 80%
Strengthen grantee organizations' future performance: 63%
Communicate publicly about what the foundation has learned through its work: 56%
Decide whether to award a first grant: 40%

LEVEL OF ENGAGEMENT OF SENIOR MANAGEMENT
(Respondents rated engagement as: no engagement, too little engagement, appropriate amount of engagement, or too much engagement.)

LESS THAN HALF of respondents say senior management engages the appropriate amount in supporting adequate investment in the evaluation capacity of grantees (appropriate 44%, too little 39%, too much 16%, no engagement 1%).
LESS THAN HALF of respondents say senior management engages the appropriate amount in considering the results of evaluation work as an important criterion when assessing staff (appropriate 43%, too little 31%, too much 26%).
ABOUT HALF of respondents say senior management engages the appropriate amount in modeling the use of information resulting from evaluation work in decision making (appropriate 52%, too little 39%, too much 9%).
OVER TWO-THIRDS of respondents say senior management engages the appropriate amount in communicating to staff that it values the use of evaluation and evaluative information (appropriate 68%, too little 24%, too much 6%, no engagement 2%).

When a respondent says the foundation's senior management engages less than the appropriate amount in evaluation, the foundation is significantly more likely to experience the following evaluation challenges:
► Allocating sufficient monetary resources for evaluation efforts
► Incorporating evaluation results into future work
► Having evaluations result in useful lessons for the field
LEVEL OF SUPPORT FROM BOARD
(Respondents rated support as: no support, little support, moderate support, or high support.)

ALMOST 40 PERCENT of respondents say there is a high level of board support for the use of evaluation or evaluative data in board-level decision making (high 38%, moderate 39%, little 22%, none 1%).
ABOUT HALF of respondents say there is a high level of board support for the use of evaluation or evaluative data in decision making by staff at the foundation (high 52%, moderate 40%, little 6%, none 2%).
FORTY PERCENT of respondents say there is a high level of board support for the role of evaluation staff at the foundation (high 40%, moderate 39%, little 14%, none 7%).
ONLY ONE-THIRD of respondents say there is a high level of board support for foundation spending on evaluation (high 34%, moderate 44%, little 19%, none 3%).

When a foundation's board is less supportive of evaluation, the foundation is significantly more likely to experience the following evaluation challenges:
► Allocating sufficient monetary resources for evaluation efforts
► Having evaluations result in meaningful insights
► Incorporating evaluation results into future work
► Having foundation staff and grantees agree on evaluation goals
► Having evaluations result in useful lessons for grantees
► Having evaluations result in useful lessons for the field
SHARING INFORMATION
Percentage of respondents who say evaluation findings are shared with the following audiences quite a bit or a lot:
Foundation's CEO: 77%
Foundation's staff: 66%
Foundation's board: 47%
Foundation's grantees: 28%
Other foundations: 17%
General public: 14%

Looking Forward

THE TOP THREE CHANGES EVALUATION STAFF HOPE TO SEE IN FIVE YEARS⁶

1. Foundations will be more strategic in the way they plan for and design evaluations so that information collected is meaningful and useful.
"Implement more strategic evaluation designs to measure initiatives and key areas of investment."
"Develop clear strategies and goals for what [the foundation] hopes to measure and assess."
"My sole wish is that evaluation data is meaningful−that it is actually linked to strategy."

2. Foundations will use evaluation data for decision-making and improving practice.
"Use evaluation deliverables to inform decisions that improve our foundation and grantee performance."
"More and more effective use of evaluative data and information for the purpose of learning and improvement for foundations."
"I would like to see the full integration of evaluation into foundation daily practice and routine decision making."

3. Foundations will be more transparent about their evaluations and share what they are learning externally.
"More public sharing both internally and externally and more frank conversation about what worked or didn't work."
"I want to expand our ability to share information to inform the fields in which we work and to inform our audiences, such as donors and policymakers."
"To improve the level of transparency surrounding evaluation, less emphasis on perfection and more on discovery."

6 Of evaluation staff who responded to our survey, 74 percent, or 94 of 127 respondents, answered the open-ended question, "In five years, what do you hope will have changed for foundations in the collection and/or use of evaluation data or information?"

Discussion Questions

1. What is the purpose of evaluation at your foundation?
How do your foundation's evaluation efforts align with its goals and strategies, if at all?
How does leadership at your foundation use information from the foundation's evaluation work, if at all?
How do your foundation's evaluation efforts align, or not align, with its organizational culture?

2. How does your foundation make decisions about each of the following:
How much to budget for evaluation work?
Which costs will be categorized as evaluation costs (e.g., salaries of staff with evaluation responsibilities, third party evaluators, data collection efforts, etc.)?

3. How are responsibilities for evaluation work structured at your foundation?
How many staff have evaluation-related responsibilities at your foundation?
What are the evaluation-related job responsibilities of these staff members? On what do they spend their time?
In which department or area do staff with evaluation-related responsibilities work, and why?
4. How, if at all, does your foundation use information from its evaluation work to inform programmatic decisions?

5. How are decisions made about with whom evaluation information will be shared:
Inside the foundation?
Outside of the foundation?

6. What changes would you like to see regarding evaluation at your foundation?
What would you hope would happen as a result of these changes?

Methodology

SAMPLE
Foundations were considered for inclusion in this sample if they:
• were based in the United States or Canada;
• were an independent foundation, including health conversion foundations, or community foundation as categorized by Foundation Directory Online and CEP's internal contact management software;
• provided $10 million or more in annual giving, according to information provided to CEP from Foundation Center in September 2014 and the Canada Revenue Agency, with help from Philanthropic Foundations Canada;
• or were members of the Center for Evaluation Innovation's (CEI) Evaluation Roundtable.

For foundations that were members of CEI's Evaluation Roundtable, the foundation's representative to the Roundtable was included in the sample. For all other foundations, the following criteria were used to determine the most senior person at the foundation who was most likely to have evaluation-related responsibilities.

An individual was deemed to be evaluation staff if his/her title included one or more of the following words, according to the foundation's website: 1. Evaluation 2. Assessment 3. Research 4. Measurement 5. Effectiveness 6. Knowledge 7. Learning 8. Impact 9. Strategy 10. Planning 11. Performance 12. Analysis

To determine which evaluation staff member at a foundation was the most senior, the following role hierarchy was used: 1. Senior Vice President 2. Vice President 3. Director 4. Deputy Director 5. Senior Manager 6. Manager 7. Senior Officer 8. Officer 9. Associate

If no staff on a foundation's website had titles or roles that included the above words related to evaluation, the most senior program staff member at the foundation was chosen for inclusion in the sample. Program staff were identified as having titles that included the words "Program" or "Grant," or mentioned a specific program area (e.g., "Education" or "Environment"). The same role hierarchy described above was used to determine seniority.
Only those individuals who had an e-mail address that could be accessed through the foundation's website, CEP staff knowledge, or CEI staff knowledge were deemed eligible to receive the survey. In September 2015, 271 foundation staff were initially sent an invitation to complete the survey. Two new members of the Evaluation Roundtable were later added to the sample and sent the survey. Later, 19 individuals were removed from the sample because they did not meet the inclusion criteria. Completed surveys were received from 120 staff members, and partially completed surveys, defined as being at least 50 percent complete, were received from seven staff members. Thus, our final sample of respondents included 127 of the 254 potential respondents, for a response rate of 50 percent. Of the foundation staff who responded to the survey, 58 percent were evaluation staff, 35 percent were program staff, and six percent were staff with a title that did not fall into either of these two categories, based on our previously defined criteria.

METHOD
The survey was fielded online during a four week period from September to October of 2015. Foundation staff with evaluation-related responsibilities were sent a brief e-mail including a description of the purpose of the survey, a statement of confidentiality, and a link to the survey. These staff were sent up to nine reminder e-mails and received up to one reminder phone call. The survey consisted of 43 items, some of which contained several sub-items. Respondents were asked about a variety of topics, including their role at their foundation and previous experience, their foundation and its evaluation function, their foundation's specific evaluation practices, and the ways in which information collected through evaluations is used.

RESPONSE BIAS
Foundations with staff who responded to this survey did not differ from non-respondent organizations by annual asset size, annual giving amount, region of the United States in which the foundation is located, or whether or not the foundation is an independent foundation. Information on assets and giving was purchased from Foundation Center in September 2014. Evaluation staff of foundations that are part of CEI's Evaluation Roundtable were more likely to respond to the survey than evaluation staff of foundations that are not part of CEI's Evaluation Roundtable.

SAMPLE DEMOGRAPHICS
Sixty-seven percent of the foundations represented in our final sample were independent foundations and 25 percent were community foundations. Of the independent foundations, 13 percent were health conversion foundations. The final eight percent of foundations in our sample included other types of funders that were part of the Evaluation Roundtable, aside from independent or community foundations. The median asset size for foundations in the sample was about $530 million and the median annual giving level was about $28 million. The median number of full-time equivalent staff working at foundations in this study was 25. The number of full-time equivalent staff is based on information purchased from Foundation Center in September 2014.

QUANTITATIVE ANALYSIS
To analyze the quantitative survey data from foundation leaders, descriptive statistics were examined. Chi-square analyses and independent samples t-tests were also conducted to examine the relationship between foundation size and evaluation structure. An alpha level of 0.05 was used to determine statistical significance for all testing, and effect sizes were examined for all analyses. Because our sample only consisted of 32 community foundations, we were unable to rigorously explore statistical differences between independent and community foundations in this study.
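The kind of chi-square test and effect-size check described above can be sketched in a few lines of code. This is an illustrative example only: the report does not publish its analysis code or the underlying counts, so the 2x2 table below is hypothetical and is used simply to show the test and a common effect-size measure (Cramér's V).

import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 contingency table (not the study's actual counts).
# Rows: asset size above vs. at/below the sample median.
# Columns: has a dedicated evaluation unit vs. does not.
table = np.array([
    [20, 40],   # above-median assets
    [7, 53],    # at/below-median assets
])

chi2, p, dof, expected = chi2_contingency(table)

# Cramér's V as a simple effect-size measure for a contingency table.
n = table.sum()
min_dim = min(table.shape) - 1
cramers_v = np.sqrt(chi2 / (n * min_dim))

print(f"chi2 = {chi2:.2f}, p = {p:.4f}, Cramér's V = {cramers_v:.2f}")
# With alpha = 0.05, p < 0.05 would indicate an association; a V of roughly
# 0.3 is conventionally read as a medium effect.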
QUALITATIVE ANALYSIS
Thematic and content analyses were conducted on the responses to the open-ended question, "In five years, what do you hope will have changed for foundations in the collection and/or use of evaluation data or information?" A coding scheme was developed for this item by reading through all responses to recognize recurring ideas, creating categories, and then coding each respondent's ideas according to the categories. A codebook was created to ensure that different coders would be coding for the same concepts rather than their individual interpretations of the concepts. One coder coded all responses to the question and a second coder coded 15 percent of those responses. At least an 80 percent level of inter-rater agreement was achieved for each code. Selected quotations were included in this publication. These quotations were selected to be representative of the themes seen in the data.
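The inter-rater agreement check described above can likewise be illustrated with a short sketch. The codes and responses below are invented for illustration; the point is only the percent-agreement calculation between the primary coder and the second coder who coded 15 percent of responses.

# Hypothetical codes assigned by two coders to the same subset of responses.
coder_1 = ["strategy", "use", "transparency", "use", "strategy", "use"]
coder_2 = ["strategy", "use", "transparency", "strategy", "strategy", "use"]

matches = sum(a == b for a, b in zip(coder_1, coder_2))
percent_agreement = 100 * matches / len(coder_1)
print(f"Inter-rater agreement: {percent_agreement:.0f}%")  # 83% for this example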
CEP FUNDERS
We are very appreciative of the support that made this work possible. See below for a list of funders.

The Leona M. and Harry B. Helmsley Charitable Trust; The McKnight Foundation; New Hampshire Charitable Foundation; New York State Health Foundation; Oak Foundation; Public Welfare Foundation; Richard M. Fairbanks Foundation; Saint Luke's Foundation; Sobrato Family Foundation; Teagle Foundation; Weingart Foundation; Wilburforce Foundation; William Penn Foundation

$500,000 OR MORE: Fund for Shared Insight; Robert Wood Johnson Foundation; The Rockefeller Foundation; The William and Flora Hewlett Foundation

$200,000 TO $499,999: The David and Lucile Packard Foundation; Ford Foundation; W.K. Kellogg Foundation

$100,000 TO $199,999: Barr Foundation; The James Irvine Foundation; The Kresge Foundation; Rockefeller Brothers Fund; S.D. Bechtel, Jr. Foundation

$50,000 TO $99,999: Gordon and Betty Moore Foundation; The Wallace Foundation

$20,000 TO $49,999: Carnegie Corporation of New York; Charles Stewart Mott Foundation; The Duke Endowment; John D. and Catherine T. MacArthur Foundation; Lumina Foundation; Surdna Foundation; W. Clement and Jessie V. Stone Foundation

UP TO $19,999: The Assisi Foundation of Memphis; California HealthCare Foundation; The Colorado Health Foundation; The Columbus Foundation; The Commonwealth Fund; Doris Duke Charitable Foundation; Evelyn and Walter Haas, Jr. Fund; The Heinz Endowments; Henry Luce Foundation; Houston Endowment; Kansas Health Foundation

INDIVIDUAL CONTRIBUTORS
Michael Bailin, Kevin Bolduc, Phil Buchanan, Alyse d'Amico, John Davidson, Robert Eckardt, Phil Giudice, Tiffany Cooper Gueye, Crystal Hayling, Paul Heggarty, Bob Hughes, Barbara Kibbe, Latia King, Patricia Kozu, Kathryn E. Merchant, Grace Nicolette, Richard Ober, Alex Ocasio, Grant Oliphant, Hilary Pennington, Christy Pichel, Nadya K. Shmavonian, Fay Twersky, Jen Vorse Wilka, Lynn Perry Wooten
675 Massachusetts Avenue, 7th Floor, Cambridge, MA 02139 | (617) 492-0800
131 Steuart Street #501, San Francisco, CA 94105 | (415) 391-3070
1625 K Street, NW, Suite 1050, Washington, DC 20006 | (202) 728-0727 ext. 116
www.effectivephilanthropy.org
www.evaluationinnovation.org

A RESEARCH STUDY BY:
STATE OF EVALUATION 2016
Evaluation Practice and Capacity in the Nonprofit Sector
#SOE2016
We appreciate the support of the David and Lucile Packard Foundation, the Robert Wood Johnson Foundation, and the Bruner Foundation for funding this research. The findings and conclusions presented in this report are those of the authors alone and do not necessarily reflect the opinions of our funders.

This project was informed by seven advisors: Diana Scearce, Ehren Reed, Lester Baxter, Lori Nascimento, Kelci Price, Michelle M. Doty, and J McCray. Thank you for your time and suggestions.

The authors acknowledge Innovation Network staff Natalia Goldin, Elidé Flores-Medel, and Laura Lehman for their contributions to this report, Stephanie Darby for the many ways she makes our work possible, and consultant Erin Keely O'Brien for her indispensable survey design advice. We also thank former Innovation Network staff Katherine Haugh, Yuqi Wang, and Smriti Bajracharya for their contributions to project design, and our evaluation colleagues and former State of Evaluation authors Ehren Reed and Ann K. Emery. And finally, we recognize the 1,125 staff of nonprofit organizations throughout the United States who took the time to thoughtfully complete the State of Evaluation 2016 survey.

stateofevaluation.org
  • 52. We’re thrilled that you are interested in the state of nonprofit evaluation. We’re more than passionate about the topic—strengthening nonprofit evaluation capacity and practice is the driving force that animates our work, research, and organization mission: For more than 20 years, Innovation Network staff have worked with nonprofits and funders to design, conduct, and build capacity for evaluation. In 2010 we kicked off the State of Evaluation project and reports from 2010 and 2012 build a knowledge base about nonprofit evaluation: how many nonprofits were doing evaluation, what they were doing, and how they were doing it. The State of Evaluation project is the first nationwide project that systematically and repeatedly collects data from nonprofits about their evaluation practices. In 2016 we received survey responses from 1,125 US-based 501(c)3 organizations. State of Evaluation 2016 contains exciting updates, three special topics (evaluation storage systems and software, big data, and pay for performance), and illuminating findings for everyone who works with nonprofit organizations. As evaluators, we’re encouraged that 92% of nonprofit organizations engaged in evaluation in the past year—an all-time high since we began the survey in 2010. And evaluation capacity held steady since 2012: both then and now 28% of nonprofit organizations exhibit a promising mix of characteristics and practices that makes them more likely to meaningfully engage in and benefit from evaluation. Funding for evaluation is also moving in the right
  • 53. direction, with more nonprofits receiving funding for evaluation from at least one source than in the past (in 2016, 92% of organizations that engaged in evaluation received funding from at least one source, compared to 66% in 2012). But for all those advances, some persistent challenges remain. In most organizations, pivotal resources of money and staff time for evaluation are severely limited or nonexistent. In 2016, only 12% of nonprofit organizations spent 5% or more of their organization budgets on evaluation, and only 8% of organizations had evaluation staff (internal or external), representing sizeable decreases. Join us in being part of the solution: support increases of resources for evaluation (both time and money), plan and embed evaluation alongside programs and initiatives, and prioritize data in decision making. We’re more committed than ever to building nonprofit evaluation capacity, and we hope you are too. Measure results. Make informed decisions. Create lasting change. HELLO! Innovation Network is a nonprofit evaluation, research, and consulting firm. We provide knowledge and expertise to help nonprofits and funders learn from their work to improve their results. Johanna Morariu Veena Pankaj
  • 54. Kat Athanasiades Deborah Grodzicki 1 #SOE2016 State of evaluation 2016 Evaluation is the tool that enables nonprofit organizations to define success and measure results in order to create lasting change. Across the social sector, evaluation takes many forms. Some nonprofit organizations conduct randomized control trial studies, and other nonprofits use qualitative designs such as case studies. Since this project launched in 2010, we have seen a slow but positive trend of an increasing percentage of nonprofit organizations engaging in evaluation. In 2016, 92% of nonprofit organizations engaged in evaluation. Promising Capacity Evaluation capacity is the extent to which an organization has the culture, expertise, and resources to continually engage in evaluation (i.e., evaluation as a continuous lifecycle of assessment, learning, and improvement). Criteria for adequate evaluation capacity may differ across organizations. We suggest that promising evaluation capacity is comprised of some internal evaluation capacity, the existence of some foundational evaluation tools, and a practice of at least annually engaging in evaluation. Applying these criteria, 28% of nonprofit organizations exhibit promising evaluation capacity (exactly the same percentage as in 2012).
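To make the capacity definition above concrete, here is a minimal sketch of how a survey record could be screened against the three criteria (some internal capacity, a foundational tool such as a logic model, and at least annual evaluation). The field names, thresholds, and sample records are illustrative assumptions; Innovation Network's actual scoring rules are not published in this report.

```python
# Illustrative sketch only: field names and thresholds are assumptions,
# not Innovation Network's actual scoring algorithm.

def has_promising_capacity(org):
    """Check one survey record against the three criteria named above."""
    some_internal_capacity = org.get("internal_capacity") in ("medium", "high")
    has_foundational_tools = bool(org.get("has_logic_model") or org.get("has_evaluation_plan"))
    evaluates_annually = org.get("evaluations_per_year", 0) >= 1
    return some_internal_capacity and has_foundational_tools and evaluates_annually

orgs = [
    {"internal_capacity": "high", "has_logic_model": True, "evaluations_per_year": 2},
    {"internal_capacity": "low", "has_logic_model": True, "evaluations_per_year": 1},
    {"internal_capacity": "medium", "has_evaluation_plan": True, "evaluations_per_year": 1},
]

share = sum(map(has_promising_capacity, orgs)) / len(orgs)
print(f"{share:.0%} of sampled organizations meet all three criteria")
```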
  • 55. In 2016, 92% of nonprofit organizations evaluated their work [n=1,125], compared to 90% in 2012 [n=535] and 85% in 2010 [n=1,043]; 8% did not evaluate their work or did not know whether their organization evaluated its work. 28% of nonprofit organizations have promising evaluation capacity. [Figure: share of organizations evaluating their work by year, and the promising-capacity criteria, showing the share of evaluating organizations with a logic model or similar document, the share of those that updated it within the past year, and the share with medium or high internal capacity.]
  • 56. The nonprofit evaluation report card is a round-up of key indicators about resources, behaviors, and culture. Comparing 2016 to 2012, what's gotten better, worse, or stayed the same?
  • 57. Nonprofit Evaluation Report Card: 92% of nonprofit organizations engaged in evaluation, compared to 90% of nonprofits in 2012; 92% of nonprofit organizations received funding for evaluation from at least one source, compared to 66% in 2012; 85% of nonprofit organizations agree that they need evaluation to know that their approach is working, compared to 68% in 2012; 28% of nonprofit organizations exhibited promising evaluation capacity, compared to 28% in 2012; 58% of nonprofit organizations had a logic model or theory of change, compared to 60% in 2012; 12% of nonprofit organizations spent 5% or more of their budgets on evaluation, compared to 27% in 2012; 8% of nonprofit organizations have evaluation staff primarily responsible for evaluation, compared to 25% in 2012.
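The report card compares each indicator's 2016 value to its 2012 value. A small sketch of that comparison using the percentages listed above (labels are shortened; "better" here simply means the percentage rose):

```python
# Published 2016 and 2012 percentages from the report card above; labels shortened.
indicators = {
    "engaged in evaluation":             (92, 90),
    "received evaluation funding":       (92, 66),
    "agree evaluation is needed":        (85, 68),
    "promising evaluation capacity":     (28, 28),
    "logic model / theory of change":    (58, 60),
    "spent 5%+ of budget on evaluation": (12, 27),
    "evaluation staff responsible":      (8, 25),
}

for label, (pct_2016, pct_2012) in indicators.items():
    change = pct_2016 - pct_2012  # percentage-point change
    verdict = "better" if change > 0 else "worse" if change < 0 else "same"
    print(f"{label}: {pct_2016}% vs {pct_2012}% ({change:+d} pts, {verdict})")
```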
  • 58. Purpose & Resourcing. Nonprofit organizations conduct evaluation for a variety of reasons. Evaluation can help nonprofits better understand the work they are doing and inform strategic adjustments to programming. In 2016, nonprofits continued to prioritize evaluation for the purpose of strengthening future work (95% of respondents identified it as a priority), learning whether objectives were achieved (94%), and learning about outcomes (91%). [n=877] Funding for Evaluation: In 2016, 92% of organizations identified at least one source of funding for evaluation, and 68% of organizations that received funding identified foundations and philanthropy as the top funding source for evaluation. [n=775]
  • 59. [Figure: Priorities for Conducting Evaluation, showing the share of organizations rating each purpose an essential or moderate priority: strengthen future work, learn about outcomes, contribute to knowledge in the field, learn whether original objectives were achieved, learn from implementation, respond to funder demands, and strengthen public policy.]
  • 60. [Figure: Sources of Funding for Evaluation among organizations that received such funding: foundation or philanthropic contributions were cited most often (68%); individual donor contributions, corporate charitable contributions, government grants or contracts, and dues, fees, or other direct charges were cited less often (between 34% and 65% of organizations in the figure).]
  • 61. Organizations funded by philanthropy were more likely to measure outcomes.
  • 62. A majority of organizations spend less than the recommended amount on evaluation. Based on our experience working in the social sector, nonprofit organizations should be allocating 5% to 10% of organization budgets to evaluation. In 2016, 84% of organizations spent less than the recommended amount on evaluation. [n=740] Organizations spent less on evaluation than in prior years; the gap between how much an organization should spend on evaluation and what it actually spends has increased. [n=775] Budgeting for Evaluation. Percent of annual budget spent on evaluation in 2016: 16% of organizations spent nothing on evaluation, 68% spent more than nothing but less than 5% of their budget (the two bars below 5%, "less than 2%" and "2%-5%," account for 43% and 25% of organizations), 8% spent 5-10%, and 4% spent 10% or more.
  • 63. In total, 84% of organizations spend less than 5% of their budget on evaluation. The percentage of organizations that spend less than 5% of the organization budget on evaluation has increased from 76% in 2010 to 84% in 2016 (73% in 2012). The percentage of organizations that spend no money on evaluation has increased from 12% in 2010 to 16% in 2016 (7% in 2012). The percentage of organizations that spend 5% or more of the organization budget has declined from 23% in 2010 to 12% in 2016 (27% in 2012).
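As a rough illustration of the 5%-10% guideline cited above, the sketch below checks a hypothetical organization's evaluation spending against that band; the budget figures are invented for the example.

```python
# Hypothetical figures; the 5%-10% band is the guideline cited in the report.
RECOMMENDED_MIN, RECOMMENDED_MAX = 0.05, 0.10

budget, evaluation_spend = 750_000, 15_000          # $750k budget, $15k on evaluation
share = evaluation_spend / budget                   # 0.02 -> 2% of budget
within_guideline = RECOMMENDED_MIN <= share <= RECOMMENDED_MAX

print(f"Evaluation share: {share:.1%}; within 5-10% guideline: {within_guideline}")
print(f"Guideline range for this budget: ${budget * RECOMMENDED_MIN:,.0f}"
      f" to ${budget * RECOMMENDED_MAX:,.0f}")
```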
  • 64. Evaluation Design & Practice. Across the sector, nonprofits are more likely to use quantitative data collection methods. Large organizations (annual budgets of $5 million or more) continue to engage in a higher percentage of both quantitative and qualitative data collection methods than small organizations (annual budgets of less than $500,000) and medium-sized organizations (annual budgets of $500,000 to $4.99 million). A variety of evaluation approaches are available to nonprofit organizations. In 2016, over 90% of organizations selected approaches to understand the difference they are making through their work (outcomes evaluation), how well they are implementing their programs (process/implementation evaluation), and overall performance accountability (performance measurement). [Figure: Common Evaluation Designs, with outcomes evaluation used most often (91%), followed by performance measurement (86%) and process/implementation evaluation (85%); three further designs were used by 77%, 66%, and 53% of organizations.]
  • 66. [Figure: Data Collection Methods, comparing the share of small, medium, and large organizations using each quantitative and qualitative practice.]
  • 67. For this analysis, n values for small organizations ranged from 235 to 241, n values for medium organizations ranged from 370 to 380, and n values for large organizations ranged from 110 to 112.
  • 68. Evaluation Planning. Logic models and theories of change are common tools used by organizations to guide evaluation design and practice. They demonstrate the connections between the work being done by an organization and the types of changes programs are designed to achieve. Having a logic model or theory of change is seen as a positive organizational attribute and demonstrates an organization's ability to plan for an evaluation. Since 2010, more large organizations have created or revised their logic model/theory of change at least annually, in comparison to small organizations. In 2016, 56% of large organizations created or revised their logic model or theory of change within the past year (up from 45% in 2012, and on par with 2010). [2016: n=112] In 2016, 35% of small organizations created or revised their logic model or theory of change within the past year (up from 30% in 2012, and on par with 2010). [2016: n=242] [Figure: trend, 2010-2016, in the share of large and small organizations creating or revising a logic model/theory of change within the past year.]
  • 69. Among all organizations [n=812], 58% have a logic model/theory of change and 44% created or revised one in the past year. Organizations that have a logic model/theory of change are more likely to: • agree that evaluation is needed to know that an approach is working
  • 70. • have higher organizational budgets • have more supports for evaluation • have foundation or philanthropic charitable contributions support their evaluation 7 #SOE2016 Staffing Evaluation staffing varies dramatically among nonprofit organizations. One in five large organizations and almost one in ten medium organizations have evaluation staff. Not surprisingly, small organizations are very unlikely to have evaluation staff. While this difference in staff resources is intuitive, organizations without evaluation staff are still frequently expected to collect data, measure their results, and report data to internal and external audiences. Furthermore, organizations with evaluation staff were found to be more likely to exhibit the mix of characteristics and behaviors of promising evaluation capacity. Who is primarily responsible for conducting evaluation work within an organization varies dramatically. Of organizations that evaluate, most nonprofit organizations (63%)
  • 71. report that the executive leadership or program staff were primarily responsible for conducting evaluations. Only 6% of nonprofit organizations report having internal evaluation staff, and 2% indicate that evaluation work was led by an external evaluator. [n=786] [Figure: staff primarily responsible for conducting evaluation: executive leadership 34%, program staff 29%, administrative staff 8%, evaluation staff 6%, fundraising and development staff 5%, Board of Directors 4%, financial staff 1%.] In total, 63% of organizations have executives and
  • 72. program staff primarily responsible for evaluation, and less than 1% of organizations do not have someone responsible for conducting evaluations. Organization Size, Staffing, and Capacity: Large and medium organizations were significantly more likely to have evaluation staff (20% and 8%, respectively), compared to small organizations (2%), p < .001. Significantly more organizations with evaluation staff reported having promising evaluation capacity (60%), compared to organizations without evaluation staff (36%), p < .001. Looking beyond who is primarily responsible for conducting evaluation work, a larger proportion of nonprofits work with external evaluators in some way: in 2016, 27% of nonprofit organizations worked with an external
  • 73. evaluator. Similar to 2010 and 2012, representatives of nonprofit organizations that worked with an external evaluator were strongly positive about their experience. [n=208] Organization size was associated with the likelihood of working with an external evaluator. Almost half of large organizations worked with an external evaluator (49%) compared to 29% of medium organizations and 14% of small organizations. External Evaluators Funding source was also associated with likelihood of working with an external evaluator. Organizations with philanthropy as a funding source were significantly more likely to have worked with external evaluators (30%) compared to organizations that did not have philanthropy as a funding source (20%), p=.004. Satisfaction with working with external evaluators does not differ by organization size 49% 29 % 14 % Small organizations
  • 74. [n=243] Medium organizations [n=381] Large organizations [n=110] 77% Evaluator completed a high quality evaluation 77% Working with an evaluator improved our work 76% Evaluator was a good use of our resources 74% Would hire an evaluator for future projects 9 #SOE2016 AUDIENCE & USE In 2016, 85% of nonprofit organizations reported that executive
  • 75. staff (CEO or ED) were the primary audience for evaluation, a 10% increase from 2012. Over three-quarters of nonprofit organizations also identified the Board of Directors as a primary audience. [n=842] Similar to previous years, most nonprofit organizations use evaluation to measure outcomes, answering the question "What difference did we make?" Almost as many nonprofit organizations focus on measuring quality and quantity, answering the questions "How well did we do?" and "How much did we do?" This finding highlights the importance nonprofit organizations place on measuring quality and quantity in addition to outcomes. [n=953] [Figure: audiences for evaluation (executive staff, Board of Directors, other staff, funders, peer organizations), showing primary, secondary, and not-an-audience shares; 85% named executive staff and 76% named the Board of Directors as a primary audience.]
  • 77. Evaluation Focus: What difference did we make? 91.3% of organizations; How well did we do? 90.6%; How much did we do? 89.6%. Organizations use evaluation results for internal and external purposes, with 94% of nonprofit organizations using results to report to the Board of Directors (internal) and 93% of nonprofit organizations using results to report to funders (external). [n=869] Nonprofit organizations were asked to indicate the extent to which they use evaluation results to support new initiatives, compare organization performance to a specific goal or benchmark, and/or assess organization performance without comparing results to a specific goal or benchmark. Over 85% of organizations regularly use evaluation results to support the development of new
  • 78. initiatives, demonstrating a growing number of organizations using evaluation to inform and develop their work. [n=851] [Figure: Evaluation Use, showing the share of organizations using evaluation results to report to the Board of Directors (94%), report to funders (93%), plan/revise program initiatives, plan/revise general strategies, report to stakeholders, make allocation decisions, share findings with peers, and advocate for a cause; the remaining values in the figure range from 52% to 91%.]
  • 79. Among three broader uses, organizations reported using evaluation results to support the development of new initiatives (44% often, 42% sometimes), to compare performance to a specific goal or benchmark (51% often, 33% sometimes), and to assess performance without comparing to a goal or benchmark (43% often, 38% sometimes).
  • 80. Barriers & Supports. Time, staff, money, knowledge—these and other factors can heavily influence whether organizations conduct evaluation. Support from organizational leadership; sufficient staff knowledge, skills, and/or tools; and an organization culture in support of evaluation were far and away the most helpful organizational supports for evaluation. Limited staff time, insufficient financial resources, and limited staff expertise in evaluation continue to be the top three barriers organizations face in evaluating their work. These are the same top three challenges named in State of Evaluation 2012 and 2010. Having staff who did not believe in the importance of evaluation has become a much greater barrier than in the past: more than twice the percentage of organizations now report this as a challenge to carrying out evaluation (2012: 12%).
  • 81. Barriers [n=830]: limited staff time (79%), insufficient financial resources (52%), limited staff knowledge, skills, and/or tools (48%), insufficient support from organization leadership (26%), having staff who did not believe in the importance of evaluation (25%), stakeholders being resistant to data collection (21%), not knowing where or how to find an external evaluator (12%), and disagreements related to data between the organization and a funder (8%).
  • 82. Supports [n=777]: support from organization leadership (77%), sufficient staff knowledge, skills, and/or tools (69%), and an organization culture in support of evaluation (67%) were cited most often; staff dedicated to
  • 83. evaluation tasks, funder support for evaluation, sufficient funding, and working with an external evaluator were each cited by between 13% and 50% of organizations. Most organizations view evaluation favorably. It is a tool that gives organizations the confidence to say whether their approach works. However, a little under half feel too pressured to measure their results. [n=901] Of organizations that evaluate their work, 83% talk with their funders about evaluation. This dialogue with funders is generally deemed useful by nonprofits. [n values range from 661 to 800] Communication and Reporting: Our organization needs
  • 84. evaluation to know that our approach is working Data collection interfered with our relationships with clients Most data that is collected in our organization is not used There is too much pressure on our organization to measure our results 85% 73% 69% 57% AGREE DISAGREE DISAGREE DISAGREE of funders are accepting of failure as an opportunity for learning
  • 85. of nonprofits find discussing findings with funders useful of nonprofits’ funders are open to including evaluation results in decisionmaking 49% 32% 15% 44% 42% 54% often never ignore often sometimes sometimes sometimes 93% 74% 69% 13 #SOE2016 Big Data Data Trends Big data is a collection of data, or a dataset, that is very large.
  • 86. A classic example of big data is US Census data. Using US Census data, staff of a nonprofit organization that provided employment services could learn more about employment trends in their community, or monitor changes in employment over time. The ability to work with big data is important, as it enables nonprofits to learn about trends and better target services and other interventions. Thirty-eight percent of organizations are currently using big data, and an additional 9% of organizations report having the ability to use big data but are not currently doing so. [n=820] Big Data: What It Takes Organizations that used big data had more supports for evaluation and larger budgets. 9% 38%41 % of nonprofits do not have the ability to use big data of nonprofits are currently using big data of nonprofits have the ability to use big data but are not currently using it Organizations that use big data had more supports for evaluation (4.06 supports)
  • 87. than those that did not use big data (3.23 supports), p<.001. Organizations that use big data had larger budgets (budgets of approximately $2-2.49 million, mean=5.56) than those that did not use big data (budgets of approximately $1.5-1.99 million, mean=4.13), p=.003. mean=3.23 Do not use big data Do not use big data mean=4.06 Use big data Use big data mean=4.13 mean=5.56 $2.0M - 2.49M avg budget $1.5-1.99M avg budget 14
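The report gives group means and p-values for these big data comparisons but does not state which statistical test was used. The sketch below shows one standard way such a comparison of means could be run (a two-sample Welch t-test) on synthetic stand-in data; none of the values below are the study's underlying data.

```python
# Illustrative only: synthetic data standing in for per-organization counts of
# evaluation supports; the report publishes only group means and p-values.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
supports_big_data = rng.poisson(lam=4.06, size=300)     # orgs using big data
supports_no_big_data = rng.poisson(lam=3.23, size=450)  # orgs not using big data

t, p = ttest_ind(supports_big_data, supports_no_big_data, equal_var=False)  # Welch's t-test
print(f"mean (big data) = {supports_big_data.mean():.2f}, "
      f"mean (no big data) = {supports_no_big_data.mean():.2f}, t = {t:.2f}, p = {p:.3g}")
```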
  • 88. Pay for performance (PFP) is when a nonprofit is compensated based on a certain outcome being achieved. Pay for performance funding requires a nonprofit to track information over time about participant outcomes. Often this tracking will need to continue after the program has concluded. For example, in a job training program, participant data would likely need to be tracked for months or years after program participation to be able to report on outcomes such as employment or continued reemployment. Pay for performance arrangements have grown in popularity as a way to reward service providers for effecting change (compared to just providing services). Currently, 13% of organizations receive pay for performance funding, and an additional 35% of organizations report having the ability to meet pay for performance data requirements (but are not currently receiving pay for performance funding). [n=833]
  • 89. Pay for Performance: What It Takes. In total, 48% of nonprofits have the ability to meet PFP data requirements or currently receive funding through pay for performance arrangements (13% receive PFP funding; 35% have the ability but do not receive it), while 43% of nonprofits do not have the ability to collect data sufficient to participate in PFP funding. Organizations that received pay for performance funding had more supports for evaluation and larger budgets: organizations that receive PFP funding had more supports for evaluation (4.26 supports) than those that did not receive PFP funding (3.50 supports), p<.001, and had larger budgets (approximately $3.5-3.99 million, mean=8.34) than those that did not receive PFP funding (approximately $1.5-1.99 million, mean=4.14), p<.001.
  • 90. [Figure: mean number of evaluation supports and average budget for organizations that do and do not receive PFP funding.]
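To illustrate how pay for performance ties payment to tracked outcomes, the sketch below computes a payout under a hypothetical contract for a job training program; the rates, outcome definitions, and participant records are invented for the example.

```python
# Hypothetical contract terms; real pay-for-performance agreements vary widely.
RATE_EMPLOYED_6_MONTHS = 1_500   # paid per participant employed 6 months after exit
RATE_EMPLOYED_12_MONTHS = 2_500  # additional payment if still employed at 12 months

participants = [
    {"id": 1, "employed_6mo": True,  "employed_12mo": True},
    {"id": 2, "employed_6mo": True,  "employed_12mo": False},
    {"id": 3, "employed_6mo": False, "employed_12mo": False},
]

payout = sum(
    RATE_EMPLOYED_6_MONTHS * p["employed_6mo"] + RATE_EMPLOYED_12_MONTHS * p["employed_12mo"]
    for p in participants
)
print(f"Total pay-for-performance payment earned: ${payout:,}")  # $5,500 in this example
```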
  • 91. 55% of nonprofit organizations are using 4 or more data storage methods. Multiple methods likely represent multiple data collection methods and data types, which could indicate a well-rounded evaluation approach. Having multiple data storage methods makes data analysis and tracking more challenging. The vast majority (85%) of nonprofit organizations are using 3 or more data storage methods. of nonprofit organizations store their evaluation data92% 92% 3% 11 % 30% 34% 21% 82% 60% 58%82% Spreadsheet software 1 method Word processing
  • 92. software 2 methods Paper copies or hard copies 3 methods Survey software 4 methods Database 5 methods 16 stateofevaluation.org Custom database solutions are the frontrunner when it comes to satisfying the needs of nonprofits. These databases may better meet the data storage needs of nonprofits, but also require a level of technical proficiency that may be out of reach for some. On average, nonprofits are satisfied almost half the time. There is room for improvement in matching data storage systems to the needs and resources of nonprofits.
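The counts above (85% of organizations using three or more storage methods, 55% using four or more) come from tallying how many distinct methods each respondent reported. A minimal sketch of that tally on made-up responses:

```python
# Hypothetical responses; each set lists the storage methods one organization reported.
responses = [
    {"spreadsheet", "word processing", "paper", "survey software", "database"},
    {"spreadsheet", "paper", "word processing"},
    {"spreadsheet", "database"},
    {"spreadsheet", "word processing", "paper", "database"},
]

counts = [len(methods) for methods in responses]
share_3_plus = sum(c >= 3 for c in counts) / len(counts)
share_4_plus = sum(c >= 4 for c in counts) / len(counts)
print(f"3+ methods: {share_3_plus:.0%}, 4+ methods: {share_4_plus:.0%}")
```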
  • 93. Over half of nonprofit organizations (58%) that evaluated in 2016 used databases to store their evaluation data. Databases were not the most common method, but were still relatively commonplace, which is encouraging. Often, there is a higher barrier to entry to working with databases as they require set-up, some expertise, and maintenance. Databases Satisfaction with Data Storage Systems of organizations that store evaluation data use a custom database, of these 395 organizations: 58% DISSATISFIED SATISFIED 18% 47% 55% 47% 45% 45% 44% 16% 18%
  • 94. 19% 18% 19% AVERAGE Custom database solution Survey software Spreadsheet software Word processing software Paper copies or hard copies 14% 5% 4% Use Salesforce Use CTK Use ETO 17 #SOE2016 Evaluation in the social sector Across the social sector, funders work hand in hand with
  • 95. grantee partners to provide services, implement initiatives, track progress, and make an impact. Evaluation in the nonprofit sector has a symbiotic relationship with evaluation in the philanthropic sector. Decisions made by funders affect which nonprofits engage in evaluation, how evaluation is staffed, how evaluation is funded, which types of evaluation are conducted, and other factors. Evaluation is more likely to be a staffed responsibility in funding institutions compared with nonprofits. For both funding and nonprofit organizations, reporting to the Board of Directors is the most prevalent use of evaluation data (94% of nonprofits, 87% of funders). The use of evaluation data to plan/revise programs or initiatives and to plan/revise general strategies was also a relatively high priority for both funders and nonprofits. Where funders and nonprofits differ is using data with each other: 93% of nonprofits use their data to report to funders and 45% of funders use their data to report to grantees. Use of Evaluation Data NONPROFITS FUNDERS 94% 87% 45% 51% 65% 20%
  • 96. 49% 93% 91% 57% 86% 52% Report to Board of Directors Report to funders Report to grantees Plan/revise program or initiatives Plan/revise general strategies Advocacy or influence Share findings with peers 75% 50% 92%
  • 97. 8% of funders evaluate their work of funders have evaluation staff of nonprofits evaluate their work of nonprofits have evaluation staff Source: McCray (2014) Is Grantmaking Getting Smarter? Source: Buteau & Coffman (2016) Benchmarking Foundation Evaluation Practices. The Center for Effective Philanthropy and the Center for Evaluation Innovation surveyed 127 individuals who were the most senior evaluation or program staff at their foundations. Foundations included in the sample were US-based independent foundations that provided $10 million or more in annual giving or were members of the Center for Evaluation Innovation’s Evaluation Roundtable. Only foundations that were known to have staff with evaluation-related responsibilities were included in the sample; therefore, results are not generalizable. Source: McCray (2014) Is Grantmaking Getting Smarter? 18 stateofevaluation.org Staff within funding and nonprofit organizations both face
  • 98. barriers to evaluation—but the barriers are different. Evaluation barriers within nonprofits are mostly due to prioritization and resource constraints, while barriers within funding institutions speak to challenges capturing the interest and involvement of program staff. The audience for an evaluation may play a large role in decisions such as the type of evaluation that is conducted, the focus of the data that is collected, or the amount of resources that are put into the evaluation. The top three audiences for funders are internal staff: executive staff, other internal staff (e.g., program staff), and the Board of Directors. Internal audiences are two of the top three audiences for nonprofits, but for nonprofits, the external funding audience is also a top priority. Peer organizations are a markedly lower priority for both nonprofits and funders, which suggests evaluation data and findings are primarily used for internal purposes and are not shared to benefit other like- minded organizations. Audience for Evaluation Evaluation Barriers NONPROFITS’ AUDIENCE RANKINGS FUNDERS’ AUDIENCE RANKINGS 1 1 2 2 3 3 4 4
  • 99. 5 5 Board of Directors Board of Directors Internal executive staff Internal executive staff Funders GranteesOther internal staff Other internal staff Peer organizations Peer organizations 79% Limited staff time 91% Program staff’s time 48% Limited staff knowledge, tools, and/or resources 50% Program staff’s attitude towards evaluation 52% Insufficient financial resources 71% Program staff’s comfort in interpreting/ using data 25% Having staff who did not believe in the
  • 100. importance of evaluation 40% Program staff’s lack of involvement in shaping the evaluations conducted Source: Buteau & Coffman (2016) Benchmarking Foundation Evaluation Practices Source: Buteau & Coffman (2016) Benchmarking Foundation Evaluation Practices NONPROFIT CHALLENGES FUNDER CHALLENGES 19 #SOE2016 Purpose. The State of Evaluation project is the first nationwide project that systematically and repeatedly collects data from US nonprofits about their evaluation practices. Survey results are intended to build understanding for nonprofits, to compare their evaluation practices to their peers; for donors and funders, to better understand how they can support evaluation practice throughout the sector; and for evaluators, to have more context about the existing evaluation practices and capacities of their nonprofit clients. Sample. The State of Evaluation 2016 sampling frame is 501(c)3 organizations in the US that updated their IRS Form 990 in 2013 or more recently and provided an email address in the IRS Form 990. Qualified organizations were identified through a GuideStar dataset. We sent an
  • 101. invitation to participate in the survey to one unique email address for each of the 37,440 organizations in the dataset. With 1,125 responses, our response rate is 3.0%. The adjusted response rate for the survey is 8.35% (removing from the denominator bounced invitations, unopened invitations, and individuals who had previously opted out from SurveyMonkey). The survey was available online from June 13, 2016, through August 6, 2016, and six reminder messages were sent during that time. Organizations were asked to answer questions based on their experiences in 2015. Analysis. Once the survey closed, the data was cleaned and many relationships between responses were tested for significance in SPSS. In State of Evaluation 2016, we note which findings are statistically significant and the significance level. Not all of the findings contained herein are statistically significant, but we believe that they have practical value to funders, nonprofits, and evaluators and so are included in these results. Representation. We compared organizations that participated in SOE 2016 with The Urban Institute’s Nonprofit Sector in Brief 2015. Our random sample had less representation from organizations with budgets under $499,999 than the sector as a whole, and more responses from organizations with budgets of $500,000 and larger. One third (33%) of our sample was from the South (14,314 organizations), 26% from the West (11,204), 18% from the Midwest (7,830), and 23% from the Northeast (9,866). METHODOLOGY Responses were mostly received from executive staff (71% of 2016 survey
  • 102. respondents), followed by fundraising and development staff (11%), administrative staff (4%), program staff (4%), evaluation staff (3.5%), Board Members (3%), financial staff (1%), and other (3%). The top three sources of funding for responding organizations were individual donor charitable contributions (27% of organizations cited these as a funding source), government grants and contracts (26%), and foundation or philanthropic charitable contributions (20%). 50% 40% 30% 20% 10% 0% Less than $100,000 $100,000- $499,999 $1m -$4.9m SOE Urban
  • 103. $500,000- $999,999 $5m or more 6.4% 29.5% 27.4% 37.0% 21.6% 10.6% 30.1% 14.4% 11.9% 8.6% Source: McKeever (2015) The Nonprofit Sector in Brief 2015: Public Charities, Giving, and Volunteering 20 stateofevaluation.org ABOUT INNOVATION NETWORK
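The raw response rate described in the methodology above is simply 1,125 responses divided by 37,440 invitations. The adjusted rate removes bounced invitations, unopened invitations, and prior opt-outs from the denominator; the report does not itemize those removals, so the breakdown in the sketch below is a hypothetical illustration of the arithmetic only.

```python
# The raw figures (1,125 responses, 37,440 invitations) are from the report; the
# breakdown of removed invitations is assumed, chosen only to show the arithmetic.
responses = 1_125
invitations = 37_440

raw_rate = responses / invitations  # ~3.0%

bounced, unopened, opted_out = 2_000, 21_467, 500   # assumed, not published
adjusted_denominator = invitations - (bounced + unopened + opted_out)
adjusted_rate = responses / adjusted_denominator    # ~8.35% with these assumptions

print(f"raw: {raw_rate:.1%}, adjusted: {adjusted_rate:.2%}")
```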
  • 104. Innovation Network is a nonprofit evaluation, research, and consulting firm. We provide knowledge and expertise to help nonprofits and funders learn from their work to improve their results. ABOUT THE AUTHORS Johanna Morariu, Director, Innovation Network Johanna is the Co-Director of Innovation Network and provides organizational leadership, project leadership, and evaluation expertise to funder and nonprofit partners. She has over a decade of experience in advocacy evaluation, including advocacy field evaluation, campaign evaluation, policy advocacy evaluation, organizing and social movements evaluation, and leadership development evaluation. Kat Athanasiades, Senior Associate, Innovation Network Kat leads and supports many of Innovation Network’s engagements across the evaluation spectrum—from evaluation planning to using analysis to advise organizations on strategy. She brings skills of evaluation design and implementation, advocacy evaluation, evaluation facilitation and training, data collection tool design, qualitative and quantitative data analysis, and other technical assistance provision. Veena Pankaj, Director, Innovation Network Veena is the Co-Director of Innovation Network and has more than 15 years of experience navigating organizations through the evaluation design and
  • 105. implementation process. She has an array of experience in the areas of health promotion, advocacy evaluation and health equity. Veena leads many of the organization’s consulting projects and works closely with foundations and nonprofits to answer questions about program design, implementation, and impact. Deborah Grodzicki PhD, Senior Associate, Innovation Network Deborah provides project leadership, backstopping, and staff management in her role as Senior Associate at Innovation Network. With nearly a decade of experience in designing, implementing, and managing evaluations, Deborah brings skills of evaluation theory and design, advocacy evaluation, quantitative and qualitative methodology, and capacity building. October 2016 This work is licensed under the Creative Commons BY- NC-SA License. It is attributed to Innovation Network, Inc. 202-728-0727 www.innonet.org [email protected] @InnoNet_Eval Evaluation and Program Planning 58 (2016) 208–215
  • 106. Original full length article Barriers and facilitators to evaluation of health policies and programs: Policymaker and researcher perspectives Carmen Huckel Schneidera,b,*, Andrew J. Milatc,d, Gabriel Moorea a Menzies Centre for Health Policy, University of Sydney, Level 6 The Hub, Charles Perkins Centre D17, The University of Sydney NSW 2006, Australia b The Sax Institute, Level 13, Building 10, 235 Jones Street, Ultimo NSW 2007, Australia c New South Wales Ministry of Health, 73 Miller St, North Sydney NSW 2060, Australia d Sydney Medical School, University of Sydney, Edward Ford Building (A27), Fisher Road, University of Sydney NSW 2006, Australia A R T I C L E I N F O Article history: Received 9 December 2015 Received in revised form 28 April 2016 Accepted 17 June 2016 Available online 11 July 2016 Keywords: Policy evaluation Health policy Public health administration Public health A B S T R A C T
  • 107. Our research sought to identify the barriers and facilitators experienced by policymakers and evaluation researchers in the critical early stages of establishing an evaluation of a policy or program. We sought to determine the immediate barriers experienced at the point of initiating or commissioning evaluations and how these relate to broader system factors previously identified in the literature. We undertook 17 semi-structured interviews with a purposive sample of senior policymakers (n = 9) and senior evaluation researchers (n = 8) in Australia. Six themes were consistently raised by participants: political influence, funding, timeframes, a ‘culture of evaluation’, caution over anticipated results, and skills of policy agency staff. Participants also reflected on the dynamics of policy-researcher relationships including different motivations, physical and conceptual separation of the policy and researcher worlds, intellectual property concerns, and trust. We found that political and system factors act as macro level barriers to good evaluation practice that are manifested as time and funding constraints and contribute to organisational cultures that can come to fear evaluation. These factors then fed into meso and micro level factors. The dynamics of policy- researcher relationship provide a further challenge to evaluating government policies and programs. ã 2016 Elsevier Ltd. All rights reserved. Contents lists available at ScienceDirect Evaluation and Program Planning
  • 108. journal homepage: www.elsevier.com/locat e/e valprogplan 1. Introduction Previous research suggests that the number of public sector policies and programs that are evaluated in industrialized nations is low (Oxman, Bjorndal, & Becerra-Posada, 2010) and that policy makers, who we define as those working in the public sector to plan, design, implement and oversee public programming, face significant hurdles in establishing and implementing policy and program evaluation (Evans, Snooks, & Howson, 2013). These challenges sit within what is known as the ‘messy’ business of public policy; including political compromise, collaboration and coordination amongst multiple stakeholders and strategic and tactical decision-making, that have long been recognised in the * Corresponding author. Present address: Menzies Centre for Health Policy, University of Sydney Level 6 The Hub, Charles Perkins Centre D17, The University of Sydney NSW 2006, Australia. E-mail addresses: [email protected] (C. Huckel Schneider), [email protected] (A.J. Milat), [email protected] (G. Moore). http://dx.doi.org/10.1016/j.evalprogplan.2016.06.011 0149-7189/ã 2016 Elsevier Ltd. All rights reserved. evaluation policy literature (Head & Alford, 2015; Parsons, 2002; House, 2004). There is however notable potential to utilise evaluation to draw lessons about public health policies’ and programs’ effectiveness,
  • 109. applicability and efficiency, all of which can contribute to a better evidence-base for informed decision making. This is particularly the case when implementation of a policy or program is studied as experimentation in practice (Petticrew et al., 2005; Craig, Cooper, Gunnell, Gunnell, & Gunnell, 2012; Banks, 2009; Palfrey, Thomas, & Phillips, 2012; Patton, 1997). There is also a growing expectation that evaluation should be used by policymakers to inform public health policies and programs (Oxman et al., 2010; Chalmers, 2003; NSW Government, 2013; Patton, 1997) and in theory, use of evaluation should have synergies with principles of new public management style public service, such as those widely adopted in countries such as United Kingdom, Australia, Scandinavia and North America (Dahler-Larsen, 2009; Johnston, 2000). There is a wide range of literature on policy and program evaluation, including considerable debate on appropriate evalua- tion methodologies, (DeGroff & Cargo, 2009; Learmonth, 2000; Rychetnik, 2010; Smith & Petticrew, 2010; White, 2010) and the http://crossmark.crossref.org/dialog/?doi=10.1016/j.evalprogpla n.2016.06.011&domain=pdf mailto:[email protected] mailto:[email protected] mailto:[email protected] mailto:[email protected] http://dx.doi.org/10.1016/j.evalprogplan.2016.06.011
  • 110. http://dx.doi.org/10.1016/j.evalprogplan.2016.06.011 http://www.sciencedirect.com/science/journal/01497189 www.elsevier.com/locate/evalprogplan C. Huckel Schneider et al. / Evaluation and Program Planning 58 (2016) 208–215 209 challenges of evaluation research uptake (Lomas, 2000; Dobbins et al., 2009; Chelimsky, 1987; Weiss, 1970). Amongst this literature there is awareness that evaluation of public health policies and programs is complex, highly context dependent, plagued by ideological debate and subject to ongoing challenges of research- er—policy collaboration (Flitcroft, Gillespie, Salkeld, Carter, & Trevena, 2011; Ross, Lavis, & Rodriguez, 2003; Haynes et al., 2012; Petticrew, 2013). There are however gaps in the literature about barriers and facilitators to undertaking quality evaluation at the level of public service where evaluations tend to be initiated and commissioned, namely in middle management and by program directors. There are also few studies that deliberately seek to gather, compare and analyse perceived barriers from both policy makers and evaluation researchers operating within the same policy space. In this paper, we aim to explore the early stages of policy and program evaluation, including the decision to evaluate, preparing for evaluation, commissioning evaluations and how policymakers and evaluation researchers work together for the purpose of high quality evaluation undertaken as part of evaluation research. Our
  • 111. primary aim is to: identify barriers and facilitators to evaluation at the initiation and commissioning stages, from within both the policy making as well as evaluation research environments. To achieve this overarching aim, we address the following: how evaluations of health policies and programs are planned, commissioned and undertaken in Australia; the barriers and facilitators experienced by policymakers and evaluation researchers in the early stages of establishing an evaluation of a policy or program and in undertaking commissioned evaluation work; and initiatives that may assist government to overcome barriers to effective evaluation. The study was designed and undertaken at the Sax Institute, an independent research and knowledge translation institute located in Sydney, Australia, to inform the development of programs to assist and improve evaluation practice in health policy. Box 1. Interview schedule questions. Questions for policy makers: 1. Could you please describe how policies and programs are evaluated in your agency? 2. What factors influence decisions to undertake evaluations of policies or programs (in your experience)? 3. What factors act as barriers or facilitators to undertaking quality evaluations? 4. What is the best way to engage with evaluators? 5. Now, thinking about commissioning external evaluations, what would help you to commission an evaluation with an external
  • 112. evaluator? Questions for evaluation researchers: 1. From your experience could you please describe how government health policies and programs are evaluated? 2. What is the quality of such evaluations? 3. What are the barriers and facilitators to undertaking good evaluations of health or health-related government policies and programs? 4. What is the best way for policy makers to engage researchers/evaluators to evaluate government policies and programs? 5. What can policy makers do to effectively commission an evaluation? 2. Methods. We undertook 17 semi-structured interviews with a purposive sample of policymakers (n = 9) and evaluation researchers (n = 8) in Australia. Policymakers had a minimum of 10 years of experience in a health policy or program delivery role in either state or federal government. They were also currently employed in a health related government department and had previously been involved with the evaluation of government policies and programs. Evaluation researchers had a minimum of eight years of experience in undertaking evaluations of health or health related policies or programs run by state or federal level government agencies in Australia and were senior academics at a university. Separate but similar interview schedules were developed for policymakers and evaluation researchers. The interviews comprised five open-ended questions (see Box 1). The study was approved by the Humanities Human Research Ethics Committee,
  • 113. University of Sydney, Australia. Informed written consent was obtained from all participants. Policymakers were asked to reflect on their experiences of how public health policies and programs are evaluated, decisions about whether or not to evaluate them, ways in which policymakers engage or work with evaluation researchers and what would help them to commission an evaluation with an external evaluator. Evaluation researchers were asked about their experiences of how government health policies and programs are evaluated in Australia, working with government agencies and asked about what would facilitate the process of undertaking evaluations of public policies and programs. Sixteen interviews were voice-recorded and transcribed and one was documented with detailed notes. We used thematic analysis to draw conclusions from the texts. We used thematic analysis as a realist method—assuming that by-and-large the responses of the interviewees to questions posed reflect the experiences, meaning and reality of the participants. Coding and identification of themes broadly followed the method of Braun and Clarke (2006) and proceeded in four steps. First, texts were de-identified and applied with markers to categorise policymakers (PM1-9) or evaluation researchers (ER1-8). Second, author CHS inductively coded eight texts (four from each category) using a free coding process. Codes were collated and merged within parent codes where possible. Authors CHS and GM
  • 114. then grouped parent codes into categories and compiled them in a proto-coding manual. Third, nine transcripts were distributed amongst all authors, CHS, AM, GM, to test-code using the proto-coding manual. Where necessary, codes were supplemented and/or merged and re-categorised. The coding manual was finalised comprising five main categories, each with several sub-categories of codes. 1) Planning and commissioning evaluations of policies and programs 2) Relationships between policy makers and evaluation researchers
  • 115. 3) Conducting evaluations of policies and programs 4) Utility of evaluation 5) Current evaluation practice Fourth, author CHS re-coded all texts using the coding manual. All authors then reviewed each code to interpret meaning and emerging themes. 3. Results Evaluations of government programs in Australia are generally initiated and commissioned at the level of the program or policy to be evaluated � though this may occur within higher level evaluation frameworks. Program managers or directors of a programmatic unit may take direct responsibility for setting initial scope requirements of an evaluation and establishing a committee structure to oversee competitive or selective–tender- ing to an external evaluator. In some government agencies there may be specific programmatic units that perform internal evaluations or facilitate the process of commissioning evaluations and establishing partnerships with research institutions. Program level managers and staff usually take on the role of facilitating access to data, site visits and other necessary permissions for evaluation work to be undertaken. Models of engagement between evaluation researchers and policy agencies vary from case to case. Respondents in our interviews described a range of ‘contracting out’ models, such as competitive and selective tender, pre-qualification/panel schemes as well as ‘collaborative engagement’ models such as co-production and researcher initiated evaluation. A discussion of these types of models of engagement is reported on elsewhere [Reference removed in blinded manuscript].
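Once transcripts are coded against the manual, counts like those reported in the Results (for example, timeframes raised by 15 participants) can be produced by tallying which participants received each theme code. The sketch below illustrates that counting step on invented data; it is not the study's coded dataset.

```python
# Illustrative tally only; the coded transcripts are not public, so the records
# below are invented to show the counting step, not the study's actual codes.
from collections import Counter

# Themes coded per participant (policymakers PM1-9 and researchers ER1-8 in the paper).
coded = {
    "PM1": {"timeframes", "funding"},
    "PM2": {"timeframes", "political influence", "caution over anticipated outcomes"},
    "ER1": {"political influence", "culture of evaluation"},
    "ER2": {"timeframes", "funding", "skills of agency staff"},
}

tally = Counter(theme for themes in coded.values() for theme in themes)
for theme, n_participants in tally.most_common():
    print(f"{theme}: raised by {n_participants} participant(s)")
```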
  • 116. Here, we discuss results of the interviews that shed light on our main aim: identifying barriers and facilitators to evaluation from within both the policy making as well as evaluation research environments. A broad array of themes emerged from the interviews that related to experiences that facilitated or acted as a barrier to evaluation. Of these, six were consistently raised as particularly important by a high number of study participants: timeframes (15 participants), political influence (12 participants), funding (10 participants), having a ‘culture of evaluation’ (9 participants), caution over anticipated evaluation outcomes (8 participants) and skills and ability of policy agency staff (8 participants). 3.1. Timeframes Evaluation timeframes was the issue raised by more study participants than any other. It acted as a barrier or facilitator in three ways: First, evaluations are often initiated late in the policy or program implementation process. This can have several conse- quences: funding to undertake evaluation can simply run out; or the situation arises that no adequate data has been collected at an early stage of implementation that can act as a baseline. There may also simply be inadequate time to undertake data collection or conduct analysis for rigorous evaluation. Many interviewees found that the consequence of starting late was to undertake small evaluations limited in scope, often evaluating processes rather than outcome. One interviewee reported on limiting the evalua- tion of a complex intervention on childhood well-being to a review
  • 117. of relationships between stakeholders in the process. “One of the things that makes it difficult is when something has been implemented and then ‘oh, let’s do an evaluation’. So you don’t have baseline data, it hasn’t been set up to exclude all sorts of other factors that may have brought about change” (ER7) Second, timeframes are often too short to conduct an evaluation. The vast majority of public health policies and programs will require years to have a measureable impact on health outcomes. For example, one interviewee had been involved in evaluation reporting for obesity prevention programs that had to focus on logic of assumptions to conduct the evaluation. “But timeframe is another problem I think. Often, the process as a new program gets rolled out and it’s expected to have massive impact somehow within a 12 month period and it’s just not a realistic timeframe to be able to demonstrate change” (ER3) The time required to design a complex evaluation and undertake complex analysis is also beyond what is feasible in many of the short timeframes of evaluations. “I mean if you want to do a proper evaluation you can’t have a turnaround time of one month or something like six weeks. You know if you’re going to do proper focus groups or you know, design questionnaires . . . and I notice a lot of times they want this quick turnaround time” (PM1) Furthermore, timeframes further limit who might be engaged
  • 118. to undertake evaluations. Academic researchers are less likely to be able to dedicate time to an evaluation that needs to be done quickly, especially if requiring the recruitment of staff. “But one of the barriers for academic groups being part of it is the timeframes. So you know you’ll normally sort of have to put in your tender bid very quickly and then you normally have to start work on it instantaneously, and academic groups are not always able to be quite so nimble as that” (ER5) Third, policymakers reported general lack of time between their everyday tasks to dedicate to careful consideration of evaluation planning, including program objectives and evaluation questions. “But because of the kind of immediacy of the job, and even if the intellectual capacity is there to really engage with the process there often isn’t the time. You know the writing of briefs, and advising people, and doing this and that. I know it is about getting the right balance . . . but it limits the time available to sit down and really nut out your questions” (PM4) 3.2. Political influence Many participants had experienced political imperatives as a major barrier to conducting evaluations; often citing it as a
  • 119. more significant issue than funding, timing or the knowledge and skills of agency staff. “I think we tend to complain that there aren’t resources and that there isn’t enough people around with the ability to do it. To me those are important but they’re much less significant than the political cloud that surrounds the whole business” (ER1). Political imperatives included those situations where political preferences influenced whether an evaluation goes ahead; pressure from higher up to have a policy or program succeed or C. Huckel Schneider et al. / Evaluation and Program Planning 58 (2016) 208–215 211 not; and any decisions based on the sensitivity of a policy or program. Certain issue areas were more likely to be subject to ideological debate and therefore political pressure than others; as was the case for certain approaches. In public health for example, debate between community vs individual responsibility as a basic paradigm that underlies policy decisions can make one issue area or policy response more politically sensitive than another. In some cases public health programs and policies can proceed at arms- length to such debates, in other cases they can be heavily influenced or weighed down by political imperatives and uncertainty (Head & Alford, 2015). Political influence was seen to be stronger at more senior levels
  • 120. of government agencies. Election cycles reduced timeframes for action and subsequently evaluation and reporting. The time required for a policy or program to actually show an effect may thus lie outside of the available timeframe required. “Policymakers obviously work with a lot of constraints and political constraints. Even when it is personally recognised that they really need is five years to see if something is making a difference they are working in an environment that doesn’t allow for that” (ER3) Political ideology and experiences of highest level public servants and elected officials were seen to trickle down into government agencies determining what programs are prioritised or placed under scrutiny. “I wouldn’t do this—but it’s possible that if something’s coming under attack and . . . or is contentious and either the government of the minister or the department thinks that it’s a valuable program that they’ll construct an evaluation or research around it in such a way that it will deliver the answers they’re looking for” (PM7) Political imperatives not only influenced if, when and with what funding a policy or program might be evaluated, but indirectly determined the focus of evaluations. For example, there may be a preference for measuring cost-utility rather than population health outcomes. In some situations, it may be more important to be immediately seen to be ‘doing something’ rather than having an actual effect.
  • 121. “If it is a politically motivated process, then the aim is to be recognised for doing the program or having the policy. The actual ‘Was it really effective?’ . . . I don’t think those questions are always required or wanted” (ER2) Both evaluation researchers and policymakers recognised the frustration that can result when evaluation results and recommendations formulated from them are not acted upon. 3.3. Funding Inadequate funding for evaluation was generally seen to compromise the quality of evaluations. It was also linked to evaluations that were initiated late in the implementation phase and sometimes after funding had run out. “So barriers—always money . . . We used to occasionally find out some of the more complex analyses, and if you don’t have money to do that then it obviously stymies your ability to do some of the more complex stuff” (PM6) Interviewees did not however experience a lack of funding for policies and programs per se, but rather that too little funding was allocated specifically for evaluation in the policy and program design phase. “They had money for the implementation of it but no money had been set aside for the evaluation for example. It was how the money was packaged” (ER5) Having evaluation written into the program design from the beginning was seen to increase the likelihood that adequate
  • 122. funding for the evaluation would be available. “If it’s written in that the program’s going to have an evaluation, then that would help, and if there’s resources identified to do it, both money and people—and skills so the whole set” (PM9) Lack of funding also influenced whether an evaluation would be undertaken with external evaluators or undertaken in-house. In situations where agency resources are scarce and there were fewer opportunities for staff to develop skills, the quality of evaluations was seen to be compromised, or at least limited in methodological scope. 3.4. Culture of evaluation A culture of evaluation with a government agency refers to a value being placed on evaluation in all decision making processes such as priority setting, program design and implementation, and funding (Dobbins et al., 2009). Both policymakers and evaluation researchers connected the extent to which such values were present within a government agency as a key facilitator to good evaluation practice. However, it was an area commonly identified for potential improvement. “ . . . so I really think it just depends on whether people are focused on doing them or not” (ER1)
  • 123. A culture of evaluation was seen to require a high degree of motivation to evaluate policies and programs, and opportunities to share experiences, positive and negative, with colleagues. It was also linked with having high level championing of evaluation and a staff mix with skills, experience and motivation to integrate evaluation into standard practice. “I mean it’s probably something to do with people that are in decision-making roles I expect � how much of an emphasis they place on evaluation and then how much they know about it” (PM9) A culture of evaluation was seen to have the potential to counteract some of the nervousness surrounding evaluation. A poor evaluation culture was characterised by a lack of support for government agency staff that were confused about how to conduct an evaluation. 3.5. Caution over anticipated outcomes Several policymakers and evaluation researchers recalled experiences where fear or worry of possible negative evaluation results (ie. results showing that a program failed to have an effect, or had made a situation worse) acted as a barrier to evaluation. This fear appeared to act as a more significant barrier for large scale programs or policies. “not actually wanting to deal with the unpalatable truth then obviously that’s a major barrier” (ER2)
  • 124. In many cases, evaluations are undertaken specifically to demonstrate positive results, and participants reported that briefs for external evaluators were written in a way that implies a certain result is expected. “Of course there are some agendas behind evaluations, where the aim is to show success rather than learn” (PM2) This can influence the eventual mode of engagement with external researchers; private consulting firms were often seen as more likely to report positively on policies and programs, in a way that is less of a risk in terms of consequences of possible negative evaluation outcomes. It also led to situations where there was an understanding that results of evaluation would not be published; and in such cases there was a concern that quality of evaluations was generally lower due to the absence of expectations of public scrutiny. “Within government, and also most organisations, people would like to see very positive stories coming out, and tend to be quite wary of independent groups taking a look at what they are doing. Because you’ll never know what they’ll say” (ER7) Interviewees postulated that the underlying cause of this degree of caution was a need to demonstrate positive outcomes