A presentation by Nalini Takeshwar as part of the Cohort Research for Programme and Policy panel discussion at the International Symposium on Cohort and Longitudinal Studies in Developing Contexts, UNICEF Office of Research - Innocenti, Florence, Italy 13-15 October 2014
Measuring Success in Patient Advocacy Initiatives │ CharityNav
In an increasingly challenging donor environment, funders want more meaningful reporting of success and outcomes by nonprofits. This webinar provides insights and knowledge that can mean the difference between scaling up - or dialing down - key initiatives.
Effectiveness is often referred to as doing the right thing, while efficiency is doing things right. Effectiveness is an external measure of process output or quality.
This talk explores the topic of engagement and its link to performance, and the key ingredients required to engage people and build a culture of high performance and high engagement:
• The link between engagement and culture
• Dimensions of a high performance culture
• Engagement Meta-Studies – impact on people, performance and business metrics
• How engaged are we – the bad news and impact of the engagement deficit?
• Case studies linking engagement initiatives to performance
• Engagement and the individual
• Barriers to engagement
• 4 critical enablers to engagement
• What really motivates people?
• 3 E’s of leadership to build engaging high performance workplaces
Jessica Weitzel presented “Finding and Incorporating Research to Increase Program Effectiveness.” The training was sponsored by the After-School Network of Western New York [@asnwny] and held at the United Way of Buffalo and Erie County [@uwbec].
New Frameworks for Measuring Capacity and Assessing Performance │ TCC Group
If we start with the assumption that — in order to improve our social sector as a whole — those who do the work to strengthen our communities (the nonprofits) are equally as critical as those responsible for providing the resources for the work to get done (the foundations), then why wouldn’t we expect all social sector actors to build their capacity? How do we know when our grantees and our foundations are becoming more effective and impactful as a result of our capacity investments, organizational development efforts and technical assistance? What does a high performing organization or foundation look like? And can we measure that?
This presentation, provided during the Grantmakers for Effective Organizations 2016 National Conference in Minneapolis, reviews and demonstrates existing resources for assessing nonprofit and foundation capacity and effectiveness. Speakers introduced the pros and cons of a variety of rubrics in use in the field and offered guidance on how funders decide on the right fit for the desired purpose. Grantmaker peers also shared how they used different frameworks and tools to assess individual nonprofits and grantee cohorts. Session participants left with increased awareness of the importance of the facilitator’s role in interpreting data gleaned from assessments and of the data collection methods most appropriate for their organization.
Anne is Deputy Head of Measurement and Evaluation at New Philanthropy Capital (NPC) and helps charities and funders to measure and communicate their impact. Her role includes developing tools and approaches for improving impact measurement for a wide range of organisations.
VAL was delighted to welcome Anne to present a workshop during our 2013 Future Focus Conference. Anne's workshop was all about helping charities tell a compelling story about what they do and the impact they have.
Specifically, Anne's workshop looked at the benefits of measuring impact, information about the 'theory of change process' to help charities understand what outcomes they are aiming to achieve, and helped groups start thinking about the type of data they need and how best to collect that data.
While the 2013 Future Focus conference is now over, VAL runs trainings and workshops year-round. If you'd like to learn more about training for your organisation, visit www.Valoneline.org.uk.
Running head CHICK-FILL-A ACTION PLAN 1CHICK-FILL-A ACTIO.docx │ susanschei
Chick-fil-A Action Plan
The Chick-fil-A organization hired Team C as consultants to address needs the organization feels must be resolved. Team C developed a project plan to address those needs and created an action plan to help the organization better understand the project plan process. In this action plan summary, Team C will discuss the data collection process, the different measures used for the action plan, and how the collected data will be converted into monetary value. Team C will also discuss how to utilize this information for training and development at Chick-fil-A.
Data Collection
Collecting data for the project is done to assess how well the project is meeting, or has met, the stated goals and objectives. According to Shays (2005), “Data collection is a delaying tactic to decide where to begin” (p. 1). There are multiple ways to collect data: questionnaires, surveys, interviews, and focus groups are types of data collection that require human interaction. These collection methods would be ideal for helping the project team evaluate the differences in customer service before and after the training. Measurement tests, simulations, and observation are a few other ways for researchers to collect data; they are meant to evaluate the depth of the employees' learning or their practical application of the skills taught as part of the training program, which makes them well suited to Chick-fil-A's training program evaluation. Project teams must research each data collection method to ensure they have selected the best methods for their project so they are able to properly assess the project's performance (Phillips et al., 2015).
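As a rough, hypothetical illustration of the kind of before-and-after comparison such data collection enables (the scores, the 1-5 scale, and the variable names below are invented for this sketch and are not drawn from the Chick-fil-A plan), a short Python snippet:

from statistics import mean

# Hypothetical customer-service ratings (1-5 scale) gathered at the same
# locations before and after the training; the numbers are illustrative only.
pre_training = [3.1, 2.8, 3.4, 3.0, 2.9]
post_training = [3.9, 3.6, 4.1, 3.8, 3.7]

pre_avg, post_avg = mean(pre_training), mean(post_training)
improvement = post_avg - pre_avg

print(f"Average rating before training: {pre_avg:.2f}")
print(f"Average rating after training:  {post_avg:.2f}")
print(f"Change: {improvement:+.2f} points ({improvement / pre_avg:.1%})")

A comparison like this is only a first step; converting the observed change into monetary value, as the action plan proposes, would still require assumptions about what an improved rating is worth to the business.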
Methods for Data Collection
To ensure project success, the consultant will utilize different data collection methods. The consultants will conduct interviews with top management to gather input at the reaction level. These interviews will provide a clear understanding of how key stakeholders and top management perceive the project, how useful they believe it will be, and how effective they perceive the chosen communication methods to be. Feedback at the reaction level is essential to ensure customer satisfaction.
Data collection for the learning level will consist of a short survey given to the trainees. A survey with mainly closed-ended questions will provide information on how the new training program skills are received. Addressing the learning objective will be key to knowing whether the trainees understand the new tools and whether the learning process is effective.
The application and implementation level is measured at various times during the project. Before each location manager is trained, they will be provided with an action plan with step-by-step instructions on h ...
Evaluation of SME and entrepreneurship programme - Jonathan Potter & Stuart T... │ OECD CFE
Presentation by Jonathan Potter, OECD LEED Senior Policy Analyst, and Stuart Thompson, OECD LEED Policy Analyst, at the seminar organised by the OECD LEED Trento Centre for the Officers of the Autonomous Province of Trento on 13 November 2015.
https://www.trento.oecd.org
A Pulse of Predictive Analytics in Higher Education │ Civitas Learning
Civitas Learning presents the findings of our survey conducted during the September 2014 Civitas Learning Summit, where more than 100 leaders representing 40 Pioneer Partner institutions gathered to share more on their work. The survey, distributed to all participants, resulted in 74 responses highlighting how this cross-section of higher education institutions is using advanced analytics to power student success initiatives.
The responsibility of the board of directors of a nonprofit is not simply to fundraise or review financials, but rather to ensure that the money raised is used well: to create impact.
Impact Management Principles. EVPA, European Venture Philanthropy Association │ Dominique Gross
EVPA’s guidance for impact management and Social Value International’s (SVI) Principles are in many ways interlinked. This document shows EVPA and SVI’s position on impact measurement and management.
The European Venture Philanthropy Association (EVPA) supports a five-step process to help organisations measure and manage their social impact. These steps aim to help venture philanthropy organisations and social investors (VPO/SIs) and social purpose organisations (SPOs) to implement a system to collect information in order to improve the products and services offered to the final beneficiaries.
490The Future of EvaluationOrienting Questions1. H.docx │ blondellchancy
The Future of Evaluation
Orienting Questions
1. How are future program evaluations likely to be different from current evaluations in
• the way in which political considerations are handled?
• the approaches that will be used?
• the involvement of stakeholders?
• who conducts them?
2. How is evaluation like some other activities in organizations?
3. How is evaluation viewed differently in other countries?
We have reached the last chapter of this book, but we have only begun to share what is known about program evaluation. The references we have made to other writings reflect only a fraction of the existing literature in this growing field. In choosing to focus attention on (1) alternative approaches to program evaluation, and (2) practical guidelines for planning, conducting, reporting, and using evaluation studies, we have tried to emphasize what we believe is most important to include in any single volume that aspires to give a broad overview of such a complex and multifaceted field. We hope we have selected well, but we encourage students and evaluation practitioners to go beyond this text to explore the richness and depth of other evaluation literature. In this final chapter, we share our perceptions and those of a few of our colleagues about evaluation’s future.
The Future of Evaluation
Hindsight is inevitably better than foresight, and ours is no exception. Yet present circumstances permit us to hazard a few predictions that we believe will hold true for program evaluation in the next few decades. History will determine whether they do.
Predictions Concerning the Profession of Evaluation
1. Evaluation will become an increasingly useful force in our society. As noted, evaluation will have increasing impacts on programs, on organizations, and on society. Many of the movements we have discussed in this text—performance monitoring, organizational learning, and others—illustrate the increasing interest in and impact of evaluation in different sectors. Evaluative means of thinking will improve ways of planning and delivering programs and policies to achieve their intended effects and, more broadly, improve society.
2. Evaluation will increase in the United States and in other developed countries as the pressure for accountability weighs heavily on governments and nonprofit organizations that deliver vital services. The emphasis on accountability and data-based decision making has increased dramatically in the first decade of the twenty-first century. Also, virtually every trend points to more, not less, evaluation in the public, private, and nonprofit sectors in the future. In some organizations, the focus is on documenting outcomes in response to external political pressures. In other organizations, evaluation is being used for organizational growth and development, which should, ultimately, improve the achievement of those outcomes. In each context, however, evaluation is in dema ...
For this assignment, review the articleAbomhara, M., & Koie.docx │ sleeperharwell
For this assignment, review the article:
Abomhara, M., & Koien, G. M. (2015). Cyber security and the internet of things: Vulnerabilities, threats, intruders, and attacks. Journal of Cyber Security, 4, 65-88. doi:10.13052/jcsm2245-1439.414
and evaluate it in 3 pages (800 words), in APA format with in-text citations and using your own words, by addressing the following:
What did the authors investigate, and in general how did they do so?
Identify the hypothesis or question being tested
Summarize the overall article.
Identify the conclusions of the authors
Indicate whether or not you think the data support their conclusions/hypothesis
Consider alternative explanations for the results
Provide any additional comments pertaining to other approaches to testing their hypothesis (logical follow-up studies to build on, confirm or refute the conclusions)
The relevance or importance of the study
The appropriateness of the experimental design
When you write your evaluation, be brief and concise; this is not meant to be an essay but an objective evaluation that one can read very easily and quickly. Also, you should include a complete reference (title, authors, journal, issue, pages) when you turn in your evaluation. This is good practice for your literature review, which you’ll be completing during the dissertation process.
.
For this assignment, provide your perspective about Privacy versus N.docx │ sleeperharwell
For this assignment, provide your perspective about Privacy versus National Security. This is a particularly "hot topic" because of recent actions by the federal government taken against Apple. So, please use information from reliable sources to support your perspective.
This assignment should be 1.5 pages in length, using Times New Roman font (size 12), double spaced on a Word document.
.
More Related Content
Similar to CEP • CEI 1BENCHMARKING FoundationEvaluationPra.docx
For this Assignment, read the case study for Claudia and find two to.docx │ sleeperharwell
For this Assignment, read the case study for Claudia and find two to three scholarly articles on social issues surrounding immigrant families.
In a 2- to 4-page paper, explain how the literature informs you about Claudia and her family when assessing her situation.
Describe two social issues related to the course-specific case study for Claudia that inform a culturally competent social worker.
Describe culturally competent strategies you might use to assess the needs of children.
Describe the types of data you would collect from Claudia and her family in order to best serve them.
Identify other resources that may offer you further information about Claudia’s case.
Create an eco-map to represent Claudia’s situation. Describe how the ecological perspective of assessment influenced how the social worker interacted with Claudia.
Describe how the social worker in the case used a strengths perspective and multiple tools in her assessment of Claudia. Explain how those factors contributed to the therapeutic relationship with Claudia and her family.
.
For this assignment, please start by doing research regarding the se.docx │ sleeperharwell
For this assignment, please start by doing research regarding the severity of prejudicial aggression/violence in the past. After you do this, research the severity of prejudicial aggression/violence that has gone on in the past decade. Target the same specific groups that have been the aggressor and victim in both your historical group and your present-day group. For instance, if you choose "black vs. white" in the 1950s, you must use the same groups for your present-day comparison. Once you do this, discuss the various ways that it is the same, as well as why it is different, between the time periods. What influences have changed? Why is it better now, or worse now, than in the past? Please discuss the impact that advancements in media (news, entertainment, and social media) have had on this issue, along with whatever you come up with outside of media influence. Make sure you back your information up with citations from your sources.
.
For this assignment, please discuss the following questionsWh.docx │ sleeperharwell
For this assignment, please discuss the following questions:
What was the name of the first computer network?
Who created this network?
When was this network established?
Explain one of the major disadvantages of this network at its initial stage
What is TCP?
Who created TCP?
What is IP?
When was it implemented?
How did the implementation of TCP/IP revolutionize communication technology?
(For a concrete reference point, a minimal socket sketch follows this question list.)
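For reference only (not part of the assignment questions themselves), here is a minimal Python sketch of what TCP/IP looks like in practice: a client and a server exchanging one message over the loopback interface. The port number and the message text are arbitrary choices for this illustration.

import socket
import threading

HOST, PORT = "127.0.0.1", 50007   # loopback address; arbitrary unprivileged port
ready = threading.Event()

def serve_once():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                      # server is now accepting connections
        conn, _addr = srv.accept()       # TCP three-way handshake completes here
        with conn:
            data = conn.recv(1024)       # TCP delivers a reliable, ordered byte stream
            conn.sendall(b"ACK: " + data)

server = threading.Thread(target=serve_once)
server.start()
ready.wait()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))            # IP addresses and routes the packets; TCP manages the connection
    cli.sendall(b"hello over TCP/IP")
    print(cli.recv(1024).decode())       # prints "ACK: hello over TCP/IP"

server.join()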
Requirements:
You must write a minimum of two paragraphs, with two different citations, and every paragraph should have at least four complete sentences for each question. Every question should have a subtitle (Bold and Centered). You must also respond to at least two of your classmates’ posts with at least 100 words each before the due date. You need to use the discussion board header provided in the getting started folder. Please proofread your work before posting your assignment.
.
For this assignment, locate a news article about an organization.docx │ sleeperharwell
For this assignment, locate a news article about an organization who experienced an ethical issue related to communication. In 1,200 to 1,550 words, complete the following:
Discuss the circumstances of the incident, the organization’s decision making process, and the public and media reaction to the organization’s decision.
Presume you have been hired by that organization to help strengthen their communication efforts. Outline at least four strategies you would recommend the organization follow in the future to enhance the ethics of their communication.
.
For this assignment, it requires you Identifies the historic conte.docx │ sleeperharwell
For this assignment, you are required to identify the historic context of ideas and cultural traditions outside the U.S. and explain how they have influenced American culture.
Topic for this paper:
The history of ramen (which technically started in China and was further developed in Japan), now a pop-culture cuisine in the U.S.
The paper should be in APA format and two full pages, double-spaced. Also, since you are researching and writing about new information, be sure to cite your source (website name, address, date you visited it) at the end of the two pages, so I know where you got your information.
.
For this assignment, create a framework from which an international .docx │ sleeperharwell
For this assignment, create a framework from which an international human resource management function can address cultural challenges. Within your framework, devise a model that includes due diligence steps, merger steps, and post-merger steps that specifically address cultural acclimation and environmental acclimation, as well as bringing two workforces together.
Support your framework with a minimum of two academic sources.
.
For this assignment, create a 15-20 slide digital presentation in tw.docx │ sleeperharwell
For this assignment, create a 15-20 slide digital presentation in two parts to educate your colleagues about meeting the needs of specific ELLs and making connections between school and family.
Part 1
In the first part of your presentation, provide your colleagues with useful information about unique factors that affect language acquisition among LTELs, RAELs, and SIFEs.
This part of the presentation should include:
A description of the characteristics of LTELs, RAELs, and SIFEs
An explanation of the cultural, sociocultural, psychological, or political factors that affect the language acquisition of LTELs, RAELs, and SIFEs
A discussion of factors that affect the language acquisition of refugee, migrant, immigrant and Native American ELLs and how each of these ELLs may relate to LTELs, RAEL, or SIFEs
A discussion of additional factors that affect the language acquisition of grades K-12 LTELs, RAEL, and SIFEs
Part 2
In the second part of the presentation, recommend culturally inclusive practices within curriculum and instruction. Provide useful resources that would empower the family members of ELLs.
This part of the presentation should include:
Examples of curriculum and materials, including technology, that promote a culturally inclusive classroom environment.
Examples of strategies that support culturally inclusive practices.
A brief description of how home and school partnerships facilitate learning.
At least two resources for families of ELLs that would empower them to become partners in their child’s academic achievement.
Presenter’s notes, title, and reference slides that contain 3-5 scholarly resources.
.
For this assignment, you are to complete aclinical case - narrat.docx │ sleeperharwell
For this assignment, you are to complete a clinical case - narrated PowerPoint report that will follow the SOAP note example provided below. The case report will be based on the clinical case scenario list below.
You are to approach this clinical scenario as if it is a real patient in the clinical setting.
Instructions:
Step 1 - Read the assigned clinical scenario and, using your clinical reasoning skills, decide on the diagnoses. This step informs your next steps.
Step 2 - Document the given information in the case scenario under the appropriate sections, headings, and subheadings of the SOAP note.
Step 3 - Document all the classic symptoms typically associated with the diagnoses in Step 1. This information may NOT be given in the scenario; you are to obtain this information from your textbooks. Include APA citations.
Example of Steps 1 - 3:
You decided on Angina after reading the clinical case scenario (Step 1)
Review of Symptoms (list of classic symptoms):
CV: sweating, squeezing, pressure, heaviness, tightening, burning across the chest starting behind the breastbone
GI: indigestion, heartburn, nausea, cramping
Pain: pain to the neck, jaw, arms, shoulders, throat, back, and teeth
Resp: shortness of breath
Musculo: weakness
Step 4 - Document the abnormal physical exam findings typically associated with the acute and chronic diagnoses decided on in Step 1. Again, this information may NOT be given. Cull this information from the textbooks. Include APA citations.
Example of Step 4:
You determined the patient has Angina in Step 1
Physical Examination (list of classic exam findings):
CV: RRR, murmur grade 1/4
Resp: diminished breath sounds left lower lobe
Step 5 - Document the diagnoses in the appropriate sections, including the ICD-10 codes, from Step 1. Include three differential diagnoses. Define each diagnosis and support each differential diagnosis with pertinent positives and negatives and what makes these choices plausible. This information may come from your textbooks. Remember to cite using APA.
Step 6 - Develop a treatment plan for the diagnoses. Only use National Clinical Guidelines to develop your treatment plans. This information will not come from your textbooks. Use your research skills to locate appropriate guidelines. The treatment plan must address the following:
a) Medications (include the dosage in mg/kg, frequency, route, and the number of days)
b) Laboratory tests ordered (include why ordered and what the results of the test may indicate)
c) Diagnostic tests ordered (include why ordered and what the results of the test may indicate)
d) Vaccines administered this visit & vaccine administration forms given,
e) Non-pharmacological treatments
f) Patient/Family education including preventive care
g) Anticipatory guidance for the visit (be sure to include exactly what you discussed during the visit; review Bright Futures website for this section)
h) Follow-up appointment with a.
For this assignment, you are provided with four video case studies (.docx │ sleeperharwell
For this assignment, you are provided with four video case studies (linked in the Resources). Review the cases of Julio and Kimi, and choose either Reese or Daneer for the third case. Review these two videos:
• The Case of Julio: Julio is a 36-year-old single gay male. He is of Cuban descent. He was born and raised in Florida by his parents with his two sisters. He attended community college but did not follow through with his plan to obtain a four-year degree, because his poor test taking skills created barriers. He currently works for a sales promotion company, where he is tasked with creating ads for local businesses. He enjoys the more social aspects of his job, but tracking the details is challenging and has caused him to lose jobs in the past. He has been dating his partner, Justin, for five years. Justin feels it is time for them to commit and build a future. Justin is frustrated that Julio refuses to plan the wedding and tends to blame Julio’s family. While Julio’s parents hold some traditional religious values, they would welcome Justin into the family but are respectfully waiting for Julio to make his plans known. Justin is as overwhelmed by the details at home as he is at work.
• The Case of Kimi: Kimi is a 48-year-old female currently separated from her husband, Robert, of 16 years. They have no children, which was consistent with Kimi’s desire to focus on her career as a sales manager. She told Robert a pregnancy would wreck her efforts to maintain her body. His desire to have a family was a goal he decided he needed to pursue with someone else. He left Kimi six months ago for a much younger woman and filed for divorce. Kimi began having issues with food during high school when she was on the dance team and felt self-conscious wearing the form-fitting uniform. During college, she sought treatment because her roommate became alarmed by her issues around eating. She never told her parents about this and felt it was behind her. Her parents are Danish and value privacy. They always expected Kimi to be independent. Her lack of communication about her private life did not concern them. They are troubled by Robert’s behavior and consider his conspicuous infidelity as a poor reflection upon their family. Kimi has moved in with her parents while she and Robert are selling the house, which has upended the balance in their relationship.
For a third case, choose one of these videos:
• The Case of Reese: Reese is a 44-year-old married African American female. Her parents live in another state, and she is their only child. Her father is a retired Marine Lieutenant Colonel who was stationed both in the United States and overseas while Reese was growing up. She entered the Air Force as soon as she graduated high school at age 17 and has achieved the rank of Chief Master Sergeant. She has been married 15 years to John, and they recently discovered she is pregnant. The unexpected pregnancy has been quite disorienting for someone who has planned.
For this assignment, you are going to tell a story, but not just.docx │ sleeperharwell
For this assignment, you are going to tell a story, but not just any story. It will be a First Nations story, and it will be your version of it.
Choose one of the two stories at the end of this unit, either "Why the Flint-Rock Cannot Fight Back"
You can write of yourself telling one of the stories.
In telling your story, here is what you will need to consider:
Clarity of speech
Intonation
Pacing and pauses
You will also have to work out how to make this telling of the story yours. You might want to read it aloud with point form notes for a prompt or to memorize it. Perhaps you want to rewrite it so that it sounds more like your words. Maybe you will change names and place-names to those you are familiar with. If you are making a video or performing this live, you should practice facial and hand gestures as well as stance and body language. The purpose of all of this is to bring your own meaning to the story.
HERE IS THE STORY
Why the Flint-Rock Cannot Fight Back
Sto-Way’-Na—Flint—was rich and powerful. His lodge was toward the sunrise. It was guarded by Squr-hein— Crane. He was the watcher. He watched from the top of a lone tree. When anybody approached, Crane would call out and warn Flint, and Flint would come out of his lodge and meet the visitor.
There was an open flat in front of the lodge. Flint met all his visitors there. Warriors and hunters came and bought flint for arrow-points and spear-heads. They paid Flint big prices for the privilege of chipping off the hard stone. Some who needed flint for their weapons were poor and could not buy. These poor persons Flint turned away.
Coyote heard about Flint and, as he wanted some arrow-points, he asked his squas-tenk’ to help him. Squas-tenk’ refused.
“Hurry, do what I ask, or I will throw you away and let the rain wash you— wash you cold,” said Coyote, and then the power gave him three rocks that were harder than the flint-rock. It also gave him a little dog that had only one ear. But this ear was sharp, like a knife; it was a knife- ear.
Then to his wife, Mole, Coyote said: “Go and make your underground trails in the flat where Sto-way’-na lives. When you have finished and see me talking with him, show yourself so we can see you.”
Then Coyote set out for Flint’s lodge. As he got near it, he had his power make a fog to cover the land, and thick fog spread over everything. Crane, the watcher, up in the lone tree, could not see Coyote. He did not know that Coyote was around.
Coyote climbed the tree and took Crane from his high perch and broke his neck. Crane had no time to cry out. Then Coyote went on to Flint’s lodge. He was almost there when Flint’s dog, Grizzly Bear, jumped out of the lodge and ran toward him.
Coyote was not scared, and he yelled at Flint: “Stop your grizzly bear dog! Stop him, or my dog will kill him.”
That amused Flint, who was looking through the doorway. He saw that Coyote’s one-eared dog was very small, hardly a mouthful for Grizzly Bear. Fli.
For this assignment, you are asked to prepare a Reflection Paper. Af.docx │ sleeperharwell
For this assignment, you are asked to prepare a Reflection Paper. After you finish the reading assignment, reflect on the concepts and write about them. What do you understand completely? What did not quite make sense? The purpose of this assignment is to provide you with the opportunity to reflect on the material you finished reading and to expand upon those thoughts.
A Reflection Paper is an opportunity for you to express your thoughts about the material by writing about them.
The writing you submit must meet the following requirements:
be at least two pages;
include your thoughts about the main topics; and
follow APA style.
.
For this assignment, you are asked to prepare a Reflection Paper. .docx │ sleeperharwell
For this assignment, you are asked to prepare a Reflection Paper. After you finish the reading assignment, reflect on the concepts and write about it. What do you understand completely? What did not quite make sense? The purpose of this assignment is to provide you with the opportunity to reflect on the material you finished reading and to expand upon those thoughts. If you are unclear about a concept, either read it again, or ask your professor. Can you apply the concepts toward your career? How?
This is not a summary. A Reflection Paper is an opportunity for you to express your thoughts about the material by writing about them.
The writing you submit must meet the following requirements:
be at least two pages;
include your thoughts about the main topics; and
include financial performance, quality performance, and personnel performance.
Format the Reflection Paper in your own words using APA style, and include citations and references as needed to avoid instances of plagiarism.
The reading assignment that you are to reflect on is Chapter 11, in the text. My written lecture for this Unit is basically a reflection on Chapter 11. Find an interesting part or two of the chapter and tell me what you got out of it. It's not a hard assignment. If you read my lecture, you will see the part of Chapter 11 that intrigued me the most was the subject of codetermination on page 367. Anything that intrigues you in Chapter 11 is fine with me.
Written Lecture
Does the ringisei decision-making process by consensus, which is used by the Japanese, reach the same conclusion as the top-down methods, which are used by American management? Some might label the Japanese decision-making system as simply procrastination. Others appreciate the method and expect productive outcomes. One major challenge is to build an organizational culture to adopt the practice of ringisei. If only half of an organization uses ringisei, it is likely to cause miscommunication and result in frustration.
The ringisei is based on the theory that the employee is an important part of the overall success of an enterprise. It is common to hear a lot about empowering the employees. Is creativity and innovation rewarded, ignored, or punished for the lower-level employee in America?
Could the Japanese system of decision making have led to the controversy over what Toyota knew about unintended acceleration problems? This may be the best example of the use of silence in Japanese culture frustrating Americans as a nation. This is not an explicit accusation of Toyota or of Japanese culture. Rather, it is inserted here to demonstrate potential consequences of management methods, processes, systems, and decision making. Read pages 106-108 of Luthans and Doh (2012) concerning this topic. According to the United States government, the cause of the unintended acceleration problem was bad floor mats or driver error. Initially, electronic problems were not mentioned.
The March 2011 Fuku.
For this assignment, you are asked to conduct some Internet research.docx │ sleeperharwell
For this assignment, you are asked to conduct some Internet research on any malware, virus or DOS attack. Summarize your findings in 3-4 paragraphs and be sure to include a link to your reference source. Explain this occurrence in your own words (do not just copy and paste what you find on the Internet).
Include the following information:
1. Name of the Malware or Virus
2. When this incident occurred (date)
3. Impact it had or explanation of the damage it caused
4. How it was detected
5. Reference source citation
.
For this assignment, you are a professor teaching a graduate-level p.docx │ sleeperharwell
For this assignment, you are a professor teaching a graduate-level public administration administrative law course at a traditional state university. Your task is to develop a formal presentation providing an overview of administrative law—specifically by comparing and contrasting the key defining aspects of administrative law within the American three-branch federal government structure, explaining how these functions are overseen/regulated, and ultimately, interpreting how they serve the common good of the public-at-large.
Your presentation must include the following with specific examples:
Articulate an understanding of how federal agencies enforce their regulations.
Explain the fundamental role that agency rulemaking plays in regulating society-at-large.
Compare both formal rulemaking and informal rulemaking.
Articulate the similarities and differences between rulemaking and adjudication.
Analyze the various methods of oversight exercised by the judicial, legislative, and executive branches of the federal government over administrative agencies.
Articulate how special interest groups (to include the media) can influence and/or shape public opinion about administrative agencies and place a spotlight on individual policies.
Incorporate appropriate animations, transitions, and graphics, as well as speaker notes for each slide. The speaker notes may consist of brief paragraphs or bulleted lists and should cite material appropriately. Add audio to each slide using the Media section of the Insert tab in the top menu bar.
Support your presentation with at least seven scholarly resources. In addition to these specified resources, other appropriate scholarly resources may be included.
Length: 15 slides (with a separate reference slide)
Notes Length: 200-350 words for each slide
Be sure to include citations for quotations and paraphrases with references in APA format and style where appropriate.
.
For this assignment, we will be visiting the PBS website,Race .docx │ sleeperharwell
For this assignment, we will be visiting the PBS website, Race: The Power of Illusion. Click on the "Learn More" link, and proceed to visit these links:
What is Race? (View All)
Sorting People (Complete both "Begin Sorting" and "Explore Traits")
Race Timeline (View All)
Human Diversity (Complete both the Quiz and "Explore Diversity")
Me, My Race & I (View Slideshow Menu)
Where Race Lives (View All)
Given the enormous amount of information presented in this website, discuss what was most interesting and surprising to you in EACH of the links.
Post your 200 word assignment.
Discussion Board Activity:
Now that you have learned that race is a social concept rather than a biological truth, respond to TWO fellow students with your thoughts on prejudice and discrimination pertaining to deviance, social class, and race.
(I'll send you two replies)
Due November 3rd
.
For this assignment, the student starts the project by identifying a.docx │ sleeperharwell
For this assignment, the student starts the project by identifying a clinical population of interest. Then, the student is to locate ten (10) nursing research articles from peer-reviewed nursing journals that reflect the clinical population of their interest. From the articles, the student identifies what has been researched and is currently known about their clinical population. The student is to write a summary of each article in a tabular format and submit a single summary table of all articles that provides a review of current knowledge on the selected population (an example and form will be provided).
.
The latest issue of The Challenger is here! We are thrilled to announce that our school paper has qualified for the NATIONAL SCHOOLS PRESS CONFERENCE (NSPC) 2024. Thank you for your unwavering support and trust. Dive into the stories that made us stand out!
Welcome to TechSoup New Member Orientation and Q&A (May 2024).pdf │ TechSoup
In this webinar you will learn how your organization can access TechSoup's wide variety of product discount and donation programs. From hardware to software, we'll give you a tour of the tools available to help your nonprofit with productivity, collaboration, financial management, donor tracking, security, and more.
Unit 8 - Information and Communication Technology (Paper I).pdf │ Thiyagu K
These slides describe the basic concepts of ICT, the basics of email, emerging technology, and digital initiatives in education. This presentation aligns with the UGC Paper I syllabus.
Operation “Blue Star” is the only event in the history of independent India where the state went to war with its own people. Even after about 40 years, it is not clear whether it was the culmination of the state's anger toward the people of the region, a political game of power, or the start of a dictatorial chapter in the democratic setup.
The people of Punjab felt alienated from the mainstream due to the denial of their just demands during a long democratic struggle since independence. As has happened all over the world, this led to a militant struggle with great loss of life among military, police, and civilian personnel. The killing of Indira Gandhi and the massacre of innocent Sikhs in Delhi and other Indian cities were also associated with this movement.
Biological screening of herbal drugs: Introduction and need for phyto-pharmacological screening, new strategies for evaluating natural products, in vitro evaluation techniques for antioxidant, antimicrobial and anticancer drugs, in vivo evaluation techniques for anti-inflammatory, antiulcer, anticancer, wound healing, antidiabetic, hepatoprotective, cardioprotective, diuretic and antifertility activity, and toxicity studies as per OECD guidelines.
2024.06.01 Introducing a competency framework for languag learning materials ...Sandy Millin
http://sandymillin.wordpress.com/iateflwebinar2024
Published classroom materials form the basis of syllabuses, drive teacher professional development, and have a potentially huge influence on learners, teachers and education systems. All teachers also create their own materials, whether a few sentences on a blackboard, a highly-structured fully-realised online course, or anything in between. Despite this, the knowledge and skills needed to create effective language learning materials are rarely part of teacher training, and are mostly learnt by trial and error.
Knowledge and skills frameworks, generally called competency frameworks, for ELT teachers, trainers and managers have existed for a few years now. However, until I created one for my MA dissertation, there wasn’t one drawing together what we need to know and do to be able to effectively produce language learning materials.
This webinar will introduce you to my framework, highlighting the key competencies I identified from my research. It will also show how anybody involved in language teaching (any language, not just English!), teacher training, managing schools or developing language learning materials can benefit from using the framework.
June 3, 2024 Anti-Semitism Letter Sent to MIT President Kornbluth and MIT Cor...Levi Shapiro
Letter from the Congress of the United States regarding Anti-Semitism sent June 3rd to MIT President Sally Kornbluth, MIT Corp Chair, Mark Gorenberg
Dear Dr. Kornbluth and Mr. Gorenberg,
The US House of Representatives is deeply concerned by ongoing and pervasive acts of antisemitic
harassment and intimidation at the Massachusetts Institute of Technology (MIT). Failing to act decisively to ensure a safe learning environment for all students would be a grave dereliction of your responsibilities as President of MIT and Chair of the MIT Corporation.
This Congress will not stand idly by and allow an environment hostile to Jewish students to persist. The House believes that your institution is in violation of Title VI of the Civil Rights Act, and the inability or
unwillingness to rectify this violation through action requires accountability.
Postsecondary education is a unique opportunity for students to learn and have their ideas and beliefs challenged. However, universities receiving hundreds of millions of federal funds annually have denied
students that opportunity and have been hijacked to become venues for the promotion of terrorism, antisemitic harassment and intimidation, unlawful encampments, and in some cases, assaults and riots.
The House of Representatives will not countenance the use of federal funds to indoctrinate students into hateful, antisemitic, anti-American supporters of terrorism. Investigations into campus antisemitism by the Committee on Education and the Workforce and the Committee on Ways and Means have been expanded into a Congress-wide probe across all relevant jurisdictions to address this national crisis. The undersigned Committees will conduct oversight into the use of federal funds at MIT and its learning environment under authorities granted to each Committee.
• The Committee on Education and the Workforce has been investigating your institution since December 7, 2023. The Committee has broad jurisdiction over postsecondary education, including its compliance with Title VI of the Civil Rights Act, campus safety concerns over disruptions to the learning environment, and the awarding of federal student aid under the Higher Education Act.
• The Committee on Oversight and Accountability is investigating the sources of funding and other support flowing to groups espousing pro-Hamas propaganda and engaged in antisemitic harassment and intimidation of students. The Committee on Oversight and Accountability is the principal oversight committee of the US House of Representatives and has broad authority to investigate “any matter” at “any time” under House Rule X.
• The Committee on Ways and Means has been investigating several universities since November 15, 2023, when the Committee held a hearing entitled From Ivory Towers to Dark Corners: Investigating the Nexus Between Antisemitism, Tax-Exempt Universities, and Terror Financing. The Committee followed the hearing with letters to those institutions on January 10, 202
Safalta Digital marketing institute in Noida, provide complete applications that encompass a huge range of virtual advertising and marketing additives, which includes search engine optimization, virtual communication advertising, pay-per-click on marketing, content material advertising, internet analytics, and greater. These university courses are designed for students who possess a comprehensive understanding of virtual marketing strategies and attributes.Safalta Digital Marketing Institute in Noida is a first choice for young individuals or students who are looking to start their careers in the field of digital advertising. The institute gives specialized courses designed and certification.
for beginners, providing thorough training in areas such as SEO, digital communication marketing, and PPC training in Noida. After finishing the program, students receive the certifications recognised by top different universitie, setting a strong foundation for a successful career in digital marketing.
The French Revolution, which began in 1789, was a period of radical social and political upheaval in France. It marked the decline of absolute monarchies, the rise of secular and democratic republics, and the eventual rise of Napoleon Bonaparte. This revolutionary period is crucial in understanding the transition from feudalism to modernity in Europe.
For more information, visit-www.vavaclasses.com
Overview on Edible Vaccine: Pros & Cons with Mechanism
BENCHMARKING FOUNDATION EVALUATION PRACTICES

FOR MORE INFORMATION, CONTACT
Ellie Buteau, Ph.D., Vice President – Research, 617-492-0800 ext. 213, [email protected]
Julia Coffman, Director, Center for Evaluation Innovation, 202-728-0727 ext. 116, [email protected]

AUTHORS
This report was prepared by the Center for Effective Philanthropy (CEP) together with the Center for Evaluation Innovation (CEI). A number of people contributed to the development of the survey instrument, analysis of data, and creation of this research report. From CEP: Ellie Buteau, Jennifer Glickman, and Charis Loh. From CEI: Julia Coffman and Tanya Beer.
… draft of the survey used for this research. The survey created for this research study drew, in part, from a survey originally created by Patti Patrizi and Elizabeth Heid Thompson in 2009. We are also grateful to Johanna Morariu for providing feedback on a draft of this report.
The authors would like to thank CEP’s President, Phil Buchanan, for his contributions to this research, as well as CEP’s Art Director, Sara Dubois, for her design of the report.
This research is based on CEP and CEI’s independent data analyses, and CEP and CEI are solely responsible for its content. The report does not necessarily reflect the individual views of the funders, advisers, or others listed throughout this report.
For more information on CEP, please visit www.effectivephilanthropy.org. For more information on CEI, please visit www.evaluationinnovation.org and www.evaluationroundtable.org.
ABOUT THE CENTER FOR EFFECTIVE PHILANTHROPY
MISSION: To provide data and create insight so philanthropic funders can better define, assess, and improve their effectiveness—and, as a result, their intended impact.

ABOUT THE CENTER FOR EVALUATION INNOVATION
MISSION: Our aim is to push philanthropic and nonprofit evaluation practice in new directions and into new arenas. We specialize in areas that are challenging to assess, such as advocacy and systems change.
Dear Colleague,

In 2015 and 2016, the Center for Effective Philanthropy and the Center for Evaluation Innovation partnered for the first time to benchmark current evaluation practices at foundations. We wanted to understand evaluation functions and staff roles at foundations, the relationship between evaluation and foundation strategy, the level of investment in and support of evaluation work, the specific evaluation activities foundations engage in, and the usefulness and use of evaluation information once it is collected.

To explore these topics, we collected survey data from 127 individuals who were the most senior evaluation or program staff at their foundations (see Methodology). These individuals came from independent and community foundations giving at least $10 million annually, or foundations that were members of the Evaluation Roundtable—a network of foundation evaluation leaders who seek to support and improve evaluation practice in philanthropy.

The result of this effort is what we believe to be the most comprehensive review ever undertaken of evaluation practices at foundations.

It is our hope that the data presented in this report will help you and your foundation determine what evaluation systems and practices align best with your foundation’s strategy, culture, and ultimate mission. What resources should you invest in evaluation? On what should your evaluation efforts focus? How can you learn from and use evaluation information? We believe that considering these questions in light of this benchmarking data can allow you to answer them more thoughtfully. Ultimately, we hope the information in this report helps you prepare your foundation to better assess its progress toward its goals and its overall performance.

We hope you find this data useful.

Sincerely,
Ellie & Julia
September 2016

Ellie Buteau, Ph.D., Vice President – Research, Center for Effective Philanthropy
Julia Coffman, Director, Center for Evaluation Innovation

DEFINITION OF EVALUATION USED IN THIS STUDY
In the survey, we defined evaluation and/or evaluation-related activities as activities undertaken to systematically assess and learn about the foundation’s work, above and beyond final grant or finance reporting, monitoring, and standard due diligence practices.
Respondent Demographics

ROLE AND TENURE
Thirty-eight percent of respondents have had responsibility for evaluation-related activities at the foundation for two years or less.
Tenure in role: less than 1 year, 13%; 1-2 years, 25%; 3-5 years, 30%; 6-8 years, 14%; 9 years or more, 18%.
Of respondents: 62% report to the CEO/President and 23% report to senior or executive level program staff; 58% are evaluation staff and 35% are program staff.
PREVIOUS EVALUATION TRAINING
Of respondents: 45% have an advanced degree in the social sciences or applied research; 37% have received training in evaluation through workshops or short courses; 5% have an advanced degree in evaluation.

Foundation Demographics
ORGANIZATIONAL STRUCTURE
21% of foundations have a dedicated evaluation unit or department, separate from the program department; 79% of foundations do not.
Of these departments: 34% have had their name changed in the past two years, 21% were newly created during the past two years, and 19% have their own grantmaking and/or contracting budget.
Of respondents at foundations without a dedicated evaluation unit or department:
66% work in program departments
89% work in operations or administration departments
20% work in the President’s office or executive office
Larger foundations are more likely to have a dedicated evaluation unit or department.1
Common department names include: Evaluation; Evaluation and Learning; Research and Evaluation; Research, Evaluation, and Learning; Learning and Impact.
1 A chi-square analysis was conducted between whether or not foundations have asset sizes greater than the median in our sample and whether or not foundations have a dedicated evaluation department. A statistical difference of a medium effect size was found. A chi-square analysis was also conducted between whether or not foundations give more than the median annual giving amount in our sample and whether or not those foundations have a dedicated evaluation department. Again, a statistical difference of a medium effect size was found.
EVALUATION STAFFING
About half of foundations have 1.5 full-time equivalent (FTE) staff or more regularly dedicated to evaluation work. For every 10 program staff members, the median foundation has about one FTE staff member regularly dedicated to evaluation work. Larger foundations tend to have more staff regularly dedicated to evaluation work.2

MODELS OF HOW EVALUATION RESPONSIBILITIES ARE MANAGED
Staff with evaluation-related responsibilities: direct and manage all or most work related to evaluation at 45% of foundations; provide advice and coaching to other foundation staff who manage all or most work related to evaluation at 21% of foundations; or hire third parties to direct and manage all or most work related to evaluation on behalf of the foundation at 14% of foundations.

EVALUATION SPENDING
About half of foundations spend $200,000 or more on evaluation (in U.S. dollars).3 About one-quarter of foundations spend $40,000 or less on evaluation, and about one-quarter spend $1 million or more. Thirty-five percent of respondents are quite or extremely confident in the dollar estimate they provided.
Of respondents: 45% perceive that funding levels for evaluation work at their foundation have stayed about the same relative to the size of the program budget over the past two years, and 50% perceive that funding levels increased relative to the size of the program budget over the past two years.

2 An independent samples t-test indicated that foundations with asset sizes greater than the median foundation in our sample were more likely to have a greater number of evaluation staff. This statistical difference was of a medium effect size.
3 In the survey, we did not put parameters around what respondents should or should not include in the dollar value they provided. Respondents were told it is understandable that it may be difficult to give a precise number, but to provide their best estimate.
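Footnote 2 refers to an independent-samples t-test comparing evaluation staffing at larger and smaller foundations. A minimal sketch of that kind of test is below; the FTE figures are illustrative, not the survey data.

```python
# Minimal sketch of the independent-samples t-test described in footnote 2,
# comparing evaluation FTEs at larger vs. smaller foundations (illustrative data only).
import numpy as np
from scipy.stats import ttest_ind

larger = np.array([2.0, 1.5, 3.0, 2.5, 1.0, 4.0, 2.0])    # hypothetical FTE counts
smaller = np.array([0.5, 1.0, 0.0, 1.5, 0.5, 1.0, 0.5])   # hypothetical FTE counts

t_stat, p_value = ttest_ind(larger, smaller)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # compare p to the report's alpha of 0.05
```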
GRANTMAKING
Half of respondents report that most or all of their grantmaking is proactive (e.g., the foundation identifies and requests proposals from organizations or programs that target specific issues or are a good fit with foundation initiatives and strategies). About one-quarter of respondents (27%) report that most or all of their grantmaking is responsive (e.g., driven by unsolicited requests from grant seekers).

Evaluation Practices
PRIORITIZATION OF EVALUATION ACTIVITIES
For each activity: the percentage of respondents spending time on it, and the percentage of those spending time on it who say it is a top priority.
Provide research or data to inform grantmaking strategy: 90% spend time; 35% top priority
Evaluate foundation initiatives or strategies: 88% spend time; 51% top priority
Develop grantmaking strategy: 86% spend time; 34% top priority
Design and/or facilitate learning processes or events within the foundation: 79% spend time; 27% top priority
Compile and/or monitor metrics to measure foundation performance: 71% spend time; 33% top priority
Evaluate individual grants: 71% spend time; 34% top priority
Design and/or facilitate learning processes or events with grantees or other external stakeholders: 70% spend time; 15% top priority
Improve grantee capacity for data collection or evaluation: 69% spend time; 14% top priority
Conduct/commission satisfaction/perception surveys (of grantees or other stakeholders): 60% spend time; 7% top priority
Disseminate evaluation findings externally: 57% spend time; 9% top priority
Refine grantmaking strategy during implementation: 87% spend time; 26% top priority
Half of respondents report spending time on at least nine of these activities.
CHALLENGES
Percentage of respondents who say the following practices have been at least somewhat challenging in their foundation’s evaluation efforts:4
Having evaluations result in useful lessons for the field: 83%
Having evaluations result in useful lessons for grantees: 82%
Having evaluations result in meaningful insights for the foundation: 76%
Incorporating evaluation results into the way the foundation will approach its work in the future: 70%
Allocating sufficient monetary resources for evaluation efforts: 63%
Identifying third party evaluators that produce high quality work: 59%
Having foundation staff and grantees agree on the goals of the evaluation: 36%
Having programmatic staff and third party evaluators agree on the goals of the evaluation: 31%

INVESTMENT IN EVALUATION ACTIVITIES
Percentage of respondents who say their foundation invests too little in the following evaluation activities:
Disseminating evaluation findings externally: 71%
Improving grantee capacity for data collection or evaluation: 69%
Designing and/or facilitating learning processes or events with grantees or other external stakeholders: 58%
Compiling and/or monitoring metrics to measure foundation performance: 55%
Designing and/or facilitating learning processes or events within the foundation: 48%
Evaluating foundation initiatives or strategies: 44%
Providing research or data to inform grantmaking strategy: 42%
Conducting/commissioning satisfaction/perception surveys (of grantees or other stakeholders): 41%
Refining grantmaking strategy during implementation: 39%
Developing grantmaking strategy: 26%
Evaluating individual grants: 22%

4 Respondents were asked to rate how challenging each of the practices has been to their foundation’s evaluation efforts on a 1-5 scale, where 1 = ‘Not at all challenging,’ 2 = ‘Not very challenging,’ 3 = ‘Somewhat challenging,’ 4 = ‘Quite challenging,’ and 5 = ‘Extremely challenging.’ The percentages included above represent respondents who rated a 3, 4, or 5 on an item.
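Footnote 4 defines the challenge percentages as the share of respondents rating an item 3 ('Somewhat challenging') or higher on the 1-5 scale. A small sketch of that calculation, with hypothetical ratings:

```python
# Sketch of how the challenge percentages in this section are defined (see footnote 4):
# the share of respondents rating an item 3 ('Somewhat challenging') or higher
# on the 1-5 scale. The ratings below are hypothetical.
ratings = [1, 3, 4, 2, 5, 3, 2, 4, 5, 1, 3, 4]

share_challenging = sum(r >= 3 for r in ratings) / len(ratings)
print(f"{share_challenging:.0%} rated the practice at least somewhat challenging")
```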
MOST COMMON APPROACHES TO FUNDING GRANTEES' EVALUATION EFFORTS
41% of foundations have no common approach to evaluating grants because funding evaluation efforts differs widely across the foundation’s program or strategy areas.
19% of respondents report that grantees can spend a portion of their grant dollars on evaluation if they request to do so.
10% of respondents report that grantees receive general operating support dollars, and they can choose to use these dollars for evaluation.5
12% of respondents say the foundation commissions outside evaluators to evaluate individual grantees’ work.
5 All other response options for this item were selected by fewer than 10 percent of respondents and not shown here.

Percentage of individual grants funded for evaluation: almost two-thirds (63%) of respondents say their foundations fund evaluations for less than 10 percent of individual grants.
RANDOMIZED CONTROL TRIALS
About one-fifth (19%) of respondents say their foundations have provided funding for a randomized control trial of their grantees’ work in the past three years.
Of those who have provided funding for a randomized control trial:
63% found it quite or extremely useful in providing evidence for the field about what does and does not work
38% found it quite or extremely useful in future grantmaking decisions
42% found it quite or extremely useful in understanding the impact the foundation’s grant dollars are making
25% found it quite or extremely useful in refining foundation strategies or initiatives
TYPES OF EVALUATION
Frequency with which different types of evaluations are conducted on grantees’ work and foundation initiatives or strategies:
Grantees’ work: Summative: 20% regularly, 52% occasionally, 28% never. Formative: 15% regularly, 53% occasionally, 31% never. Developmental: 10% regularly, 46% occasionally, 44% never.
Foundation initiatives or strategies: Summative: 25% regularly, 53% occasionally, 22% never. Formative: 20% regularly, 57% occasionally, 23% never. Developmental: 22% regularly, 36% occasionally, 42% never.

COLLABORATION
Over 40 percent of respondents say their foundation has engaged in efforts to coordinate its evaluation work with other funders working in the same issue areas:
Yes, we are already engaged in such efforts: 42%
No, but we are currently considering such efforts: 28%
No, we considered it but concluded it was not right for us: 6%
No, we have not considered engaging in any such efforts: 24%

Using Evaluation Information
UNDERSTANDING
Percentage of respondents who believe their foundation understands quite or very accurately what it has accomplished through its work, when it comes to each of the following:
Grantee organizations it seeks to affect: 46%
Fields it seeks to affect: 35%
Communities it seeks to affect: 22%
Ultimate beneficiaries it seeks to affect: 20%

CHALLENGES
Percentage of respondents who say each of the following is a challenge for program staff’s use of information collected through, or resulting from, evaluation work:
Program staff’s time: 91%
Program staff’s level of comfort in interpreting/using data: 71%
Program staff’s attitudes toward evaluation: 50%
Program staff’s lack of involvement in shaping the evaluations conducted: 40%
USE OF INFORMATION
Percentage of respondents who say program staff are likely to use information collected through, or resulting from, evaluations to inform the following aspects of their work:
Decide whether to adjust grantmaking strategies during implementation: 74%
Decide whether to renew grantees’ funding: 71%
Decide whether to expand into new program areas or exit program areas: 76%
Hold grantees accountable to the goals of their grants: 57%
Understand what the foundation has accomplished through its work: 80%
Strengthen grantee organizations’ future performance: 63%
Communicate publicly about what the foundation has learned through its work: 56%
Decide whether to award a first grant: 40%

LEVEL OF ENGAGEMENT OF SENIOR MANAGEMENT
LESS THAN HALF of respondents (44%) say senior management engages the appropriate amount in supporting adequate investment in the evaluation capacity of grantees.
LESS THAN HALF of respondents (43%) say senior management engages the appropriate amount in considering the results of evaluation work as an important criterion when assessing staff.
ABOUT HALF of respondents (52%) say senior management engages the appropriate amount in modeling the use of information resulting from evaluation work in decision making.
OVER TWO-THIRDS of respondents (68%) say senior management engages the appropriate amount in communicating to staff that it values the use of evaluation and evaluative information.
(Remaining respondents report too little engagement, too much engagement, or no engagement.)
When a respondent says the foundation’s senior management engages less than the appropriate amount in evaluation, the foundation is significantly more likely to experience the following evaluation challenges:
► Allocating sufficient monetary resources for evaluation efforts
► Incorporating evaluation results into future work
► Having evaluations result in useful lessons for the field
LEVEL OF SUPPORT FROM BOARD
ALMOST 40 PERCENT of respondents (38%) say there is a high level of board support for the use of evaluation or evaluative data in board-level decision making.
ABOUT HALF of respondents (52%) say there is a high level of board support for the use of evaluation or evaluative data in decision making by staff at the foundation.
FORTY PERCENT of respondents say there is a high level of board support for the role of evaluation staff at the foundation.
ONLY ONE-THIRD of respondents (34%) say there is a high level of board support for foundation spending on evaluation.
(Remaining respondents report moderate support, little support, or no support.)
When a foundation’s board is less supportive of evaluation, the foundation is significantly more likely to experience the following evaluation challenges:
► Allocating sufficient monetary resources for evaluation efforts
► Having evaluations result in meaningful insights
► Incorporating evaluation results into future work
► Having foundation staff and grantees agree on evaluation goals
► Having evaluations result in useful lessons for grantees
► Having evaluations result in useful lessons for the field

SHARING INFORMATION
Percentage of respondents who say evaluation findings are shared with the following audiences quite a bit or a lot:
Foundation’s CEO: 77%
Foundation’s staff: 66%
Foundation’s board: 47%
Foundation’s grantees: 28%
Other foundations: 17%
General public: 14%
Looking Forward
THE TOP THREE CHANGES EVALUATION STAFF HOPE TO SEE IN FIVE YEARS6

1. Foundations will be more strategic in the way they plan for and design evaluations so that information collected is meaningful and useful.
"Implement more strategic evaluation designs to measure initiatives and key areas of investment."
"Develop clear strategies and goals for what [the foundation] hopes to measure and assess."
"My sole wish is that evaluation data is meaningful – that it is actually linked to strategy."

6 Of evaluation staff who responded to our survey, 74 percent, or 94 of 127 respondents, answered the open-ended question, "In five years, what do you hope will have changed for foundations in the collection and/or use of evaluation data or information?"
2. Foundations will use evaluation data for decision-making and improving practice.
"Use evaluation deliverables to inform decisions that improve our foundation and grantee performance."
"More and more effective use of evaluative data and information for the purpose of learning and improvement for foundations."
"I would like to see the full integration of evaluation into foundation daily practice and routine decision making."

3. Foundations will be more transparent about their evaluations and share what they are learning externally.
"More public sharing both internally and externally and more frank conversation about what worked or didn’t work."
"I want to expand our ability to share information to inform the fields in which we work and to inform our audiences, such as donors and policymakers."
"To improve the level of transparency surrounding evaluation, less emphasis on perfection and more on discovery."
DISCUSSION QUESTIONS

1. What is the purpose of evaluation at your foundation?
How do your foundation’s evaluation efforts align with its goals and strategies, if at all?
How does leadership at your foundation use information from the foundation’s evaluation work, if at all?
How do your foundation’s evaluation efforts align, or not align, with its organizational culture?

2. How does your foundation make decisions about each of the following:
How much to budget for evaluation work?
Which costs will be categorized as evaluation costs (e.g., salaries of staff with evaluation responsibilities, third party evaluators, data …)?

3. How many staff have evaluation-related responsibilities at your foundation?
What are the evaluation-related job responsibilities of these staff members? On what do they spend their time?
In which department or area do staff with evaluation-related responsibilities work, and why?
4. How, if at all, does your foundation use information from its evaluation work to inform programmatic decisions?

5. How are decisions made about with whom evaluation information will be shared:
Inside the foundation?
Outside of the foundation?

6. What changes would you like to see regarding evaluation at your foundation?
What would you hope would happen as a result of these changes?
SAMPLE DEMOGRAPHICS
Sixty-seven percent of the foundations represented in our final sample were independent foundations and 25 percent were community foundations. Of the independent foundations, 13 percent were health conversion foundations. The final eight percent of foundations in our sample included other types of funders that were part of the Evaluation Roundtable, aside from independent or community foundations.
The median asset size for foundations in the sample was about $530 million and the median annual giving level was about $28 million. The median number of full-time equivalent staff working at foundations in this study was 25. The number of full-time equivalent staff is based on information purchased from Foundation Center in September 2014.
QUANTITATIVE ANALYSIS
To analyze the quantitative survey data from foundation leaders, descriptive statistics were examined. Chi-square analyses and independent samples t-tests were also conducted to examine the relationship between foundation size and evaluation structure. An alpha level of 0.05 was used to determine statistical significance for all testing, and effect sizes were examined for all analyses. Because our sample consisted of only 32 community foundations, we were unable to rigorously explore statistical differences between independent and community foundations in this study.
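The analysis above reports effect sizes alongside significance tests but does not name the measures used; phi (for a 2x2 chi-square) and Cohen's d (for an independent-samples t-test) are standard choices for these tests and are sketched here with hypothetical inputs.

```python
# Standard effect-size calculations that could accompany the chi-square and t-test
# analyses described above. The measures and inputs here are illustrative assumptions.
import numpy as np

def phi_coefficient(chi2: float, n: int) -> float:
    """Effect size for a 2x2 contingency table: phi = sqrt(chi2 / n)."""
    return float(np.sqrt(chi2 / n))

def cohens_d(group_a: np.ndarray, group_b: np.ndarray) -> float:
    """Standardized mean difference using the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * group_a.var(ddof=1) + (nb - 1) * group_b.var(ddof=1)) / (na + nb - 2)
    return float((group_a.mean() - group_b.mean()) / np.sqrt(pooled_var))

print(phi_coefficient(chi2=11.6, n=127))   # about 0.30, conventionally a medium effect
print(cohens_d(np.array([2.0, 3.0, 2.5, 4.0]), np.array([1.0, 0.5, 1.5, 1.0])))
```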
QUALITATIVE ANALYSIS
Thematic and content analyses were conducted on the responses to the open-ended question, "In five years, what do you hope will have changed for foundations in the collection and/or use of evaluation data or information?" A coding scheme was developed for this item by reading through all responses to recognize recurring ideas, creating categories, and then coding each respondent’s ideas according to the categories.
A codebook was created to ensure that different coders would be coding for the same concepts rather than their individual interpretations of the concepts. One coder coded all responses to the question and a second coder coded 15 percent of those responses. At least an 80 percent level of inter-rater agreement was achieved for each code.
Selected quotations were included in this publication. These quotations were selected to be representative of the themes seen in the data.
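The 80 percent inter-rater agreement threshold above amounts to a percent-agreement check between two coders on the double-coded subset. A minimal sketch with hypothetical code assignments follows (the study reports agreement per code; this simplified version computes overall agreement across items):

```python
# Sketch of a simple percent-agreement check between two coders.
# The code assignments below are hypothetical.
coder_1 = ["transparency", "use of data", "strategy", "use of data", "transparency", "strategy"]
coder_2 = ["transparency", "use of data", "use of data", "use of data", "transparency", "strategy"]

agreements = sum(a == b for a, b in zip(coder_1, coder_2))
agreement_rate = agreements / len(coder_1)
print(f"Inter-rater agreement: {agreement_rate:.0%}")  # the study required at least 80% per code
```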
Only those individuals who had an e-mail address that could be
accessed through
the foundation’s website, CEP staff knowledge, or CEI staff
knowledge were
deemed eligible to receive the survey.
In September 2015, 271 foundation staff were initially sent an
invitation to
complete the survey. Two new members of the Evaluation
Roundtable were later
added to the sample and sent the survey. Later, 19 individuals
were removed from
the sample because they did not meet the inclusion criteria.
Completed surveys
were received from 120 staff members, and partially completed
surveys, defined as
being at least 50 percent complete, were received from seven
staff members.
Thus, our final sample of respondents included 127 of the 254
potential
respondents, for a response rate of 50 percent. Of the
foundation staff who
responded to the survey, 58 percent were evaluation staff, 35
percent were
program staff, and six percent were staff with a title that did not
fall into either of
these two categories, based on our previously defined criteria.
METHOD
The survey was fielded online during a four-week period from September to October of 2015. Foundation staff with evaluation-related responsibilities were sent a brief e-mail including a description of the purpose of the survey, a statement of confidentiality, and a link to the survey. These staff were sent up to nine reminder e-mails and received up to one reminder phone call.
The survey consisted of 43 items, some of which contained several sub-items. Respondents were asked about a variety of topics, including their role at their foundation and previous experience, their foundation and its evaluation function, their foundation’s specific evaluation practices, and the ways in which information collected through evaluations is used.
RESPONSE BIAS
Foundations with staff who responded to this survey did not differ from non-respondent organizations by annual asset size, annual giving amount, region of the United States in which the foundation is located, or whether or not the foundation is an independent foundation. Information on assets and giving was purchased from Foundation Center in September 2014. Evaluation staff of foundations that are part of CEI’s Evaluation Roundtable were more likely to respond to the survey than evaluation staff of foundations that are not part of CEI’s Evaluation Roundtable.
www.effectivephilanthropy.org
www.evaluationinnovation.org
CEP FUNDERS
We are very appreciative of the support that made this work possible. See below for a list of funders.
INDIVIDUAL CONTRIBUTORS
Michael Bailin
Kevin Bolduc
Phil Buchanan
Alyse d’Amico
John Davidson
Robert Eckardt
Phil Giudice
Tiffany Cooper Gueye
Crystal Hayling
Paul Heggarty
Bob Hughes
Barbara Kibbe
Latia King
Patricia Kozu
Kathryn E. Merchant
Grace Nicolette
Richard Ober
Alex Ocasio
Grant Oliphant
Hilary Pennington
Christy Pichel
Nadya K. Shmavonian
Fay Twersky
Jen Vorse Wilka
Lynn Perry Wooten
$500,000 OR MORE
Fund for Shared Insight
Robert Wood Johnson Foundation
The Rockefeller Foundation
The William and Flora Hewlett Foundation
$200,000 TO $499,999
The David and Lucile Packard Foundation
Ford Foundation
W.K. Kellogg Foundation
$100,000 TO $199,999
Barr Foundation
The James Irvine Foundation
The Kresge Foundation
Rockefeller Brothers Fund
S.D. Bechtel, Jr. Foundation
$50,000 TO $99,999
Gordon and Betty Moore Foundation
The Wallace Foundation
$20,000 TO $49,999
Carnegie Corporation of New York
Charles Stewart Mott Foundation
The Duke Endowment
John D. and Catherine T. MacArthur Foundation
Lumina Foundation
Surdna Foundation
W. Clement and Jessie V. Stone Foundation
UP TO $19,999
The Assisi Foundation of Memphis
California HealthCare Foundation
The Colorado Health Foundation
The Columbus Foundation
The Commonwealth Fund
Doris Duke Charitable Foundation
Evelyn and Walter Haas, Jr. Fund
The Heinz Endowments
Henry Luce Foundation
Houston Endowment
Kansas Health Foundation
The Leona M. and Harry B. Helmsley Charitable Trust
The McKnight Foundation
New Hampshire Charitable Foundation
New York State Health Foundation
Oak Foundation
Public Welfare Foundation
Richard M. Fairbanks Foundation
Saint Luke’s Foundation
Sobrato Family Foundation
Teagle Foundation
Weingart Foundation
Wilburforce Foundation
William Penn Foundation
675 Massachusetts Avenue
7th Floor
Cambridge, MA 02139
(617) 492-0800
131 Steuart Street
#501
San Francisco, CA 94105
(415) 391-3070
1625 K Street, NW
Suite 1050
Washington, DC 20006
(202) 728-0727 ext.116
www.effectivephilanthropy.org
www.evaluationinnovation.org
A RESEARCH STUDY BY:
STATE OF EVALUATION 2016
Evaluation Practice and Capacity in the Nonprofit Sector
We appreciate the support of the David and Lucile Packard Foundation, the Robert Wood Johnson Foundation, and the Bruner Foundation for funding this research. The findings and conclusions presented in this report are those of the authors alone and do not necessarily reflect the opinions of our funders. This project was informed by seven advisors: Diana Scearce, Ehren Reed, Lester Baxter, Lori Nascimento, Kelci Price, Michelle M. Doty, and J McCray. Thank you for your time and suggestions. The authors acknowledge Innovation Network staff Natalia Goldin, Elidé Flores-Medel, and Laura Lehman for their contributions to this report and Stephanie Darby for the many ways she makes our work possible, and consultant Erin Keely O’Brien for her indispensable survey design advice. We also thank former Innovation Network staff Katherine Haugh, Yuqi Wang, and Smriti Bajracharya for their contributions to project design, and our evaluation colleagues and former State of Evaluation authors Ehren Reed and Ann K. Emery. And finally, we recognize the 1,125 staff of nonprofit organizations throughout the United States who took the time to thoughtfully complete the State of Evaluation 2016 survey.
HELLO!
We’re thrilled that you are interested in the state of nonprofit evaluation. We’re more than passionate about the topic—strengthening nonprofit evaluation capacity and practice is the driving force that animates our work, research, and organization mission: Measure results. Make informed decisions. Create lasting change.
For more than 20 years, Innovation Network staff have worked with nonprofits and funders to design, conduct, and build capacity for evaluation. In 2010 we kicked off the State of Evaluation project, and reports from 2010 and 2012 built a knowledge base about nonprofit evaluation: how many nonprofits were doing evaluation, what they were doing, and how they were doing it. The State of Evaluation project is the first nationwide project that systematically and repeatedly collects data from nonprofits about their evaluation practices.
In 2016 we received survey responses from 1,125 US-based 501(c)3 organizations. State of Evaluation 2016 contains exciting updates, three special topics (evaluation storage systems and software, big data, and pay for performance), and illuminating findings for everyone who works with nonprofit organizations.
As evaluators, we’re encouraged that 92% of nonprofit organizations engaged in evaluation in the past year—an all-time high since we began the survey in 2010. And evaluation capacity held steady since 2012: both then and now, 28% of nonprofit organizations exhibit a promising mix of characteristics and practices that makes them more likely to meaningfully engage in and benefit from evaluation. Funding for evaluation is also moving in the right direction, with more nonprofits receiving funding for evaluation from at least one source than in the past (in 2016, 92% of organizations that engaged in evaluation received funding from at least one source, compared to 66% in 2012).
But for all those advances, some persistent challenges remain. In most organizations, pivotal resources of money and staff time for evaluation are severely limited or nonexistent. In 2016, only 12% of nonprofit organizations spent 5% or more of their organization budgets on evaluation, and only 8% of organizations had evaluation staff (internal or external), representing sizeable decreases.
Join us in being part of the solution: support increases of resources for evaluation (both time and money), plan and embed evaluation alongside programs and initiatives, and prioritize data in decision making. We’re more committed than ever to building nonprofit evaluation capacity, and we hope you are too.

Innovation Network is a nonprofit evaluation, research, and consulting firm. We provide knowledge and expertise to help nonprofits and funders learn from their work to improve their results.

Johanna Morariu, Veena Pankaj, Kat Athanasiades, Deborah Grodzicki
Evaluation is the tool that enables nonprofit organizations to define success and measure results in order to create lasting change. Across the social sector, evaluation takes many forms. Some nonprofit organizations conduct randomized control trial studies, and other nonprofits use qualitative designs such as case studies. Since this project launched in 2010, we have seen a slow but positive trend of an increasing percentage of nonprofit organizations engaging in evaluation. In 2016, 92% of nonprofit organizations engaged in evaluation.

Promising Capacity
Evaluation capacity is the extent to which an organization has the culture, expertise, and resources to continually engage in evaluation (i.e., evaluation as a continuous lifecycle of assessment, learning, and improvement). Criteria for adequate evaluation capacity may differ across organizations. We suggest that promising evaluation capacity comprises some internal evaluation capacity, the existence of some foundational evaluation tools, and a practice of at least annually engaging in evaluation. Applying these criteria, 28% of nonprofit organizations exhibit promising evaluation capacity (exactly the same percentage as in 2012).
(Figure: shares of organizations that evaluated their work, had a logic model or similar document, had medium or high internal capacity, and had updated those documents within the past year.)
Nonprofit Evaluation Report Card
The nonprofit evaluation report card is a round-up of key indicators about resources, behaviors, and culture. Comparing 2016 to 2012, what’s gotten better, worse, or stayed the same?
92% of nonprofit organizations engaged in evaluation, compared to 90% of nonprofits in 2012
92% of nonprofit organizations received funding for evaluation from at least one source, compared to 66% of nonprofits in 2012
85% of nonprofit organizations agree that they need evaluation to know that their approach is working, compared to 68% of nonprofits in 2012
28% of nonprofit organizations exhibited promising evaluation capacities, compared to 28% of nonprofits in 2012
58% of nonprofit organizations had a logic model or theory of change, compared to 60% of nonprofits in 2012
12% of nonprofit organizations spent 5% or more of their budgets on evaluation, compared to 27% of nonprofits in 2012
8% of nonprofit organizations have evaluation staff primarily responsible for evaluation, compared to 25% of nonprofits in 2012
Purpose & Resourcing
Nonprofit organizations conduct evaluation for a variety of reasons. Evaluation can help nonprofits better understand the work they are doing and inform strategic adjustments to programming. In 2016, nonprofits continue to prioritize evaluation for the purpose of strengthening future work (95% of respondents identified it as a priority), learning whether objectives were achieved (94%), and learning about outcomes (91%). [n=877]

Funding for Evaluation
In 2016, 92% of organizations identified at least one source of funding for evaluation, and 68% of organizations that received funding identified foundations and philanthropy as the top funding source for evaluation. [n=775]
(Figures: priorities for conducting evaluation, including strengthening future work, learning about outcomes, and contributing to knowledge in the field; and sources of funding for evaluation, including foundation or philanthropic contributions, individual donor contributions, corporate charitable contributions, government grants or contracts, and dues, fees, or other direct charges.)
Budgeting for Evaluation
A majority of organizations spend less than the recommended amount on evaluation. Based on our experience working in the social sector, nonprofit organizations should be allocating 5% to 10% of organization budgets to evaluation. In 2016, 84% of the organizations spent less than the recommended amount on evaluation. [n=740]
Organizations spent less on evaluation than in prior years. The gap between how much an organization should spend on evaluation and what they actually spend has increased. [n=775]
Percent of Annual Budget Spent on Evaluation (2016): 0% of budget: 16% of organizations; less than 2%: 43% of organizations; 2-5%: 25% of organizations; 5-10%: 8% of organizations; 10% or more: 4% of organizations. Overall, 84% of organizations spend less than 5% on evaluation.
The percentage of organizations that spend less than 5% of the organization budget on evaluation has increased from 76% in 2010 to 84% in 2016. The percentage of organizations that spend no money on evaluation has increased from 12% in 2010 to 16% in 2016. The percentage of organizations that spend 5% or more of the organization budget has declined from 23% in 2010 to 12% in 2016.
EVALUATION DESIGN & PRACTICE
Across the sector, nonprofits are more likely to use quantitative data collection methods. Large organizations (organizations with annual budgets of $5 million or more) continue to engage in a higher percentage of both quantitative and qualitative data collection methods than small organizations (annual budgets of less than $500,000) and medium-sized organizations (annual budgets of $500,000 to $4.99 million).
There are a variety of evaluation approaches available to nonprofit organizations. In 2016, over 90% of organizations have selected approaches to understand the difference they are making through their work (outcomes evaluation), how well they are implementing their programs (process/implementation evaluation), and to assess overall performance accountability (performance measurement).
(Figure: Common Evaluation Designs, with outcomes evaluation, performance measurement, and process/implementation evaluation the most common.)
(Figure: Data Collection Methods: percentage of small, medium, and large organizations using quantitative and qualitative data collection practices. For this analysis, n values for small organizations ranged from 235 to 241, n values for medium organizations ranged from 370 to 380, and n values for large organizations ranged from 110 to 112.)
Evaluation Planning
Logic models and theories of change are common tools used by organizations to guide evaluation design and practice. They demonstrate the connections between the work being done by an organization and the types of changes programs are designed to achieve. Having a logic model or theory of change is seen as a positive organizational attribute and demonstrates an organization’s ability to plan for an evaluation.
Since 2010, more large organizations have created or revised their logic model/theory of change at least annually, in comparison to small organizations. In 2016, 56% of large organizations created or revised their logic model or theory of change within the past year (up from 45% in 2012, and on par with 2010). [2016: n=112] In 2016, 35% of small organizations created or revised their logic model or theory of change within the past year (up from 30% in 2012, and on par with 2010). [2016: n=242]
Organizations that have a logic model/theory of change are more likely to:
• agree that evaluation is needed to know that an approach is working
• have higher organizational budgets
• have more supports for evaluation
• have foundation or philanthropic charitable contributions support their evaluation
Staffing
Evaluation staffing varies dramatically among nonprofit organizations. One in five large organizations and almost one in ten medium organizations have evaluation staff. Not surprisingly, small organizations are very unlikely to have evaluation staff. While this difference in staff resources is intuitive, organizations without evaluation staff are still frequently expected to collect data, measure their results, and report data to internal and external audiences. Furthermore, organizations with evaluation staff were found to be more likely to exhibit the mix of characteristics and behaviors of promising evaluation capacity.
Who is primarily responsible for conducting evaluation work within an organization varies dramatically. Of organizations that evaluate, most nonprofit organizations (63%) report that the executive leadership or program staff were primarily responsible for conducting evaluations. Only 6% of nonprofit organizations report having internal evaluation staff, and 2% indicate that evaluation work was led by an external evaluator. [n=786]
Primary responsibility for evaluation: executive leadership 34%, program staff 29%, administrative staff 8%, internal evaluation staff 6%, fundraising and development staff 5%, Board of Directors 4%, financial staff 1%. Less than 1% of organizations do not have someone responsible for conducting evaluations.
Organization Size, Staffing, and Capacity
Large and medium organizations were significantly more likely to have evaluation staff (20% and 8%, respectively), compared to small organizations (2%), p < .001. Significantly more organizations with evaluation staff reported having promising evaluation capacity (60%), compared to organizations without evaluation staff (36%), p < .001.
External Evaluators
Looking beyond who is primarily responsible for conducting evaluation work, a larger proportion of nonprofits work with external evaluators in some way: in 2016, 27% of nonprofit organizations worked with an external evaluator. Similar to 2010 and 2012, representatives of nonprofit organizations that worked with an external evaluator were strongly positive about their experience. [n=208]
Organization size was associated with the likelihood of working with an external evaluator. Almost half of large organizations worked with an external evaluator (49%), compared to 29% of medium organizations and 14% of small organizations. Funding source was also associated with likelihood of working with an external evaluator: organizations with philanthropy as a funding source were significantly more likely to have worked with external evaluators (30%) compared to organizations that did not have philanthropy as a funding source (20%), p=.004. Satisfaction with working with external evaluators does not differ by organization size.

In 2016, 85% of organizations identified executive staff (CEO or ED) as the primary audience for evaluation, a 10% increase from 2012. Over three-quarters (76%) of nonprofit organizations also identified the Board of Directors as a primary audience. [n=842]

Evaluation Focus
Similar to previous years, most nonprofit organizations use evaluation to measure outcomes, answering the question of What difference did we make? (91.3%). Almost as many nonprofit organizations focus on measuring quality and quantity, answering the questions of How well did we do? (90.6%) and How much did we do? (89.6%). This finding highlights the importance nonprofit organizations place on measuring quality and quantity in addition to outcomes. [n=953]
Organizations use evaluation results for internal and external purposes, with 94% of nonprofit organizations using results to report to the Board of Directors (internal) and 93% of nonprofit organizations using results to report to funders (external). [n=869]
Nonprofit organizations were asked to indicate the extent to which they use evaluation results to support new initiatives, compare organization performance to a specific goal or benchmark, and/or assess organization performance without comparing results to a specific goal or benchmark. Over 85% of organizations regularly use evaluation results to support the development of new initiatives, demonstrating a growing number of organizations using evaluation to inform and develop their work. [n=851]
(Figure: uses of evaluation results, including planning/revising program initiatives, advocating for a cause, sharing findings with peers, reporting to the Board of Directors, and reporting to stakeholders.)
Barriers & Supports
Time, staff, money, knowledge—these and other factors can
heavily influence whether organizations conduct
evaluation. Support from organizational leadership, sufficient
staff knowledge, skills, and/or tools, and an
organization culture in support of evaluation were far and away
the most helpful organizational supports for
evaluation.
Limited staff time, insufficient financial resources, and limited
staff expertise in evaluation continue to be the
top three barriers organizations face in evaluating their work.
These are the same top three challenges named
in State of Evaluation 2012 and 2010. Having staff who did not
believe in the importance of evaluation has
become a much greater barrier than in the past: more than twice
the percentage of organizations now report
this as a challenge to carrying out evaluation (2012: 12%).
[Charts: barriers to evaluation (n=830) included limited staff time, insufficient financial resources, limited staff knowledge, skills, and/or tools, having staff who did not believe in the importance of evaluation, disagreements over data between the organization and a funder, not knowing where or how to find an external evaluator, stakeholders being resistant to data collection, and insufficient support from organization leadership. Supports for evaluation (n=777) included funder support for evaluation, sufficient funding, and working with an external evaluator.]
Most organizations view evaluation favorably. It is a tool that
gives organizations the confidence to
say whether their approach works. However, a little under half
feel too pressured to measure their
results. [n=901]
Of organizations that evaluate their work, 83% talk with their
funders about evaluation. This dialogue
with funders is generally deemed useful by nonprofits. [n values
range from 661 to 800]
Communication and Reporting
[Chart: 85% of organizations agree that “Our organization needs evaluation to know that our approach is working.” 73% disagree that “Data collection interfered with our relationships with clients.” 69% disagree that “Most data that is collected in our organization is not used.” 57% disagree that “There is too much pressure on our organization to measure our results.”]
[Chart: how often nonprofits find discussing findings with funders useful, how often funders are accepting of failure as an opportunity for learning, and how often funders are open to including evaluation results in decision-making (reported as often, sometimes, or never; combined often-plus-sometimes totals of 93%, 74%, and 69% across the three measures).]
Big Data
Data Trends
Big data is a collection of data, or a dataset, that is very large.
A classic example of big data is US Census data.
Using US Census data, staff of a nonprofit organization that
provided employment services could learn more
about employment trends in their community, or monitor
changes in employment over time. The ability to
work with big data is important, as it enables nonprofits to learn
about trends and better target services and
other interventions. Thirty-eight percent of organizations are
currently using big data, and an additional 9%
of organizations report having the ability to use big data but are
not currently doing so. [n=820]
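As an illustration of the kind of big-data work described here, the sketch below assumes a CSV extract of census-style employment estimates and computes a simple employment-rate trend with pandas. The file name and column names are hypothetical; they are not taken from the report.

```python
# Minimal sketch, assuming a CSV extract of census-style employment estimates.
# The file name and columns (year, tract, employed, labor_force) are hypothetical.
import pandas as pd

df = pd.read_csv("census_employment_extract.csv")
df["employment_rate"] = df["employed"] / df["labor_force"]

# Change in the average employment rate over time for the service area
trend = df.groupby("year")["employment_rate"].mean()
print(trend)
```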
Big Data: What It Takes
Organizations that used big data had more supports for
evaluation and larger budgets.
[Chart: 38% of nonprofits are currently using big data; 9% have the ability to use big data but are not currently using it; 41% do not have the ability to use big data.]
Organizations that use big data had more
supports for evaluation (4.06 supports)
than those that did not use big data (3.23
supports), p<.001.
Organizations that use big data had
larger budgets (budgets of approximately
$2-2.49 million, mean=5.56) than those
that did not use big data (budgets
of approximately $1.5-1.99 million,
mean=4.13), p=.003.
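The comparisons above contrast group means. A minimal sketch of one way such a contrast could be computed is below, using an independent-samples t-test on hypothetical supports counts; the report does not state which test was actually used.

```python
# Minimal sketch of an independent-samples comparison; the arrays are
# hypothetical counts of evaluation supports, not the survey data.
import numpy as np
from scipy.stats import ttest_ind

uses_big_data = np.array([5, 4, 4, 3, 5, 4, 4, 5, 3, 4])
no_big_data = np.array([3, 3, 2, 4, 3, 3, 4, 3, 2, 3])

t_stat, p_value = ttest_ind(uses_big_data, no_big_data, equal_var=False)
print(f"mean (use big data) = {uses_big_data.mean():.2f}")
print(f"mean (do not use)   = {no_big_data.mean():.2f}")
print(f"p = {p_value:.4f}")
```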
Pay for performance (PFP) is a funding arrangement in which a nonprofit is compensated based on achieving a specified outcome.
Pay for performance funding requires a nonprofit to track
information over time about participant outcomes.
Often this tracking will need to continue after the program has
concluded. For example, in a job training
program, participant data would likely need to be tracked for
months or years after program participation
to be able to report on outcomes such as employment or
continued reemployment. Pay for performance
arrangements have grown in popularity as a way to reward service providers for effecting change (compared
to just providing services). Currently, 13% of organizations
receive pay for performance funding, and an
additional 35% of organizations report having the ability to
meet pay for performance data requirements
(but are not currently receiving pay for performance funding).
[n=833]
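To make the data requirement concrete, the sketch below assumes a small longitudinal table of participant follow-ups and computes the share employed at each follow-up point, the kind of outcome a PFP arrangement might pay on. The column names and the 6- and 12-month milestones are hypothetical.

```python
# Minimal sketch of tracking participant outcomes after a program ends.
# Column names and the 6/12-month milestones are hypothetical.
import pandas as pd

followups = pd.DataFrame({
    "participant_id":    [1, 1, 2, 2, 3, 3],
    "months_after_exit": [6, 12, 6, 12, 6, 12],
    "employed":          [True, True, False, True, True, False],
})

# Share of participants employed at each follow-up milestone, the kind of
# outcome a pay-for-performance contract might specify.
outcome_rates = followups.groupby("months_after_exit")["employed"].mean()
print(outcome_rates)
```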
Pay for Performance: What It Takes
13% of nonprofits receive PFP funding, and an additional 35% have the ability to meet PFP data requirements but do not receive PFP funding, for a combined 48% that have the ability or currently receive funding through pay for performance arrangements. 43% of nonprofits do not have the ability to collect data sufficient to participate in PFP funding.
Organizations that received pay for performance funding had more supports for evaluation and larger budgets.
Organizations that receive pay for
performance (PFP) funding had more
supports for evaluation (4.26 supports)
than those that did not receive PFP funding
(3.50 supports), p<.001.
Organizations that receive pay for performance
(PFP) funding had larger budgets (budgets of
approximately $3.5-3.99 million, mean=8.34)
than those that did not receive PFP funding
(budgets of approximately $1.5-1.99 million,
mean=4.14), p<.001.
Data Storage Systems & Software
Nonprofits store their evaluation data in many different ways,
both low- and high-tech. Overall, 92% of
nonprofit organizations that engaged in evaluation in 2016
stored evaluation data in some type of storage
system. [n=794]
In most organizations, data that can be used for evaluation will
come from multiple places. Top data storage
methods include spreadsheet software (92%), word processing
software (82%), and paper copies/hard copies
(82%). Less frequently used methods include survey software
(60%) and database solutions (58%). [n=727]
55% of nonprofit organizations are using 4 or more data storage methods. Multiple methods likely represent multiple data collection methods and data types, which could indicate a well-rounded evaluation approach. However, having multiple data storage methods makes data analysis and tracking more challenging. The vast majority (85%) of nonprofit organizations are using 3 or more data storage methods.
92% of nonprofit organizations store their evaluation data.
[Chart: number of data storage methods used — 1 method, 3%; 2 methods, 11%; 3 methods, 30%; 4 methods, 34%; 5 methods, 21%. Top methods: spreadsheet software (92%), word processing software (82%), paper or hard copies (82%), survey software (60%), and databases (58%).]
Custom database solutions are the frontrunner when it comes to
satisfying the needs of nonprofits. These
databases may better meet the data storage needs of nonprofits,
but also require a level of technical
proficiency that may be out of reach for some. On average,
nonprofits are satisfied almost half the time.
There is room for improvement in matching data storage
systems to the needs and resources of nonprofits.
Over half of nonprofit organizations (58%) that evaluated in
2016 used databases to store their evaluation data.
Databases were not the most common method, but were still
relatively commonplace, which is encouraging.
Often, there is a higher barrier to entry to working with
databases as they require set-up, some expertise, and
maintenance.
Databases
Satisfaction with Data Storage Systems
58% of organizations that store evaluation data use a custom database. Of these 395 organizations, 47% are satisfied with their custom database and 18% are dissatisfied.
[Chart: satisfaction with the other data storage systems, by method.]
grantee partners to provide services, implement
initiatives, track progress, and make an impact. Evaluation in
the nonprofit sector has a symbiotic relationship
with evaluation in the philanthropic sector. Decisions made by
funders affect which nonprofits engage in
evaluation, how evaluation is staffed, how evaluation is funded,
which types of evaluation are conducted, and
other factors.
Evaluation is more likely to be a staffed responsibility in
funding institutions compared with nonprofits.
For both funding and nonprofit organizations, reporting to the
Board of Directors is the most prevalent use
of evaluation data (94% of nonprofits, 87% of funders). The use
of evaluation data to plan/revise programs
or initiatives and to plan/revise general strategies was also a
relatively high priority for both funders and
nonprofits. Where funders and nonprofits differ is using data
with each other: 93% of nonprofits use their data
to report to funders and 45% of funders use their data to report
to grantees.
Use of Evaluation Data
[Table: use of evaluation data by nonprofits and funders. Reporting to the Board of Directors: 94% of nonprofits, 87% of funders. Reporting to funders: 93% of nonprofits. Reporting to grantees: 45% of funders. Other uses compared include planning/revising programs or initiatives, planning/revising general strategies, advocacy or influence, and sharing findings with peers.]
[Chart: share of funders and nonprofits that evaluate their work and that have evaluation staff.]
Source: McCray (2014) Is Grantmaking Getting Smarter?
Source: Buteau & Coffman (2016) Benchmarking Foundation Evaluation Practices. The Center for Effective Philanthropy and the Center for Evaluation Innovation surveyed 127 individuals who were the most senior evaluation or program staff at their foundations. Foundations included in the sample were US-based independent foundations that provided $10 million or more in annual giving or were members of the Center for Evaluation Innovation’s Evaluation Roundtable. Only foundations that were known to have staff with evaluation-related responsibilities were included in the sample; therefore, results are not generalizable.
Staff within funding and nonprofit organizations both face
barriers to evaluation—but the barriers are different.
Evaluation barriers within nonprofits are mostly due to
prioritization and resource constraints, while barriers
within funding institutions speak to challenges capturing the
interest and involvement of program staff.
The audience for an evaluation may play a large role in
decisions such as the type of evaluation that is
conducted, the focus of the data that is collected, or the amount
of resources that are put into the evaluation.
The top three audiences for funders are internal staff: executive
staff, other internal staff (e.g., program staff),
and the Board of Directors. Internal audiences are two of the
top three audiences for nonprofits, but for
nonprofits, the external funding audience is also a top priority.
Peer organizations are a markedly lower
priority for both nonprofits and funders, which suggests
evaluation data and findings are primarily used for
internal purposes and are not shared to benefit other like-
minded organizations.
Audience for Evaluation
Evaluation Barriers
Nonprofits’ audience rankings: 1. Board of Directors; 2. Internal executive staff; 3. Funders; 4. Other internal staff; 5. Peer organizations.
Funders’ audience rankings: 1. Board of Directors; 2. Internal executive staff; 3. Other internal staff; 4. Grantees; 5. Peer organizations.
Nonprofit challenges: limited staff time (79%); insufficient financial resources (52%); limited staff knowledge, tools, and/or resources (48%); having staff who did not believe in the importance of evaluation (25%).
Funder challenges: program staff’s time (91%); program staff’s comfort in interpreting/using data (71%); program staff’s attitude towards evaluation (50%); program staff’s lack of involvement in shaping the evaluations conducted (40%).
Source: Buteau & Coffman (2016) Benchmarking Foundation Evaluation Practices
Purpose. The State of Evaluation project is the first nationwide
project that systematically and repeatedly
collects data from US nonprofits about their evaluation practices. Survey results are intended to build understanding for nonprofits, so they can compare their evaluation practices to those of their peers; for donors and funders,
to better understand how they can support evaluation practice
throughout the sector; and for evaluators, to
have more context about the existing evaluation practices and
capacities of their nonprofit clients.
Sample. The State of Evaluation 2016 sampling frame is
501(c)3 organizations in the US that updated their IRS
Form 990 in 2013 or more recently and provided an email
address in the IRS Form 990. Qualified organizations
were identified through a GuideStar dataset. We sent an
invitation to participate in the survey to one unique
email address for each of the 37,440 organizations in the
dataset. With 1,125 responses, our response
rate is 3.0%. The adjusted response rate for the survey is 8.35%
(removing from the denominator bounced
invitations, unopened invitations, and individuals who had
previously opted out from SurveyMonkey). The
survey was available online from June 13, 2016, through August
6, 2016, and six reminder messages were
sent during that time. Organizations were asked to answer
questions based on their experiences in 2015.
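The response-rate arithmetic described above can be reproduced directly from the reported figures; the adjusted denominator is not given in the text, so it is back-solved below purely for illustration.

```python
# Sketch of the response-rate arithmetic using the figures reported above.
# The adjusted denominator is not reported, so it is back-solved here.
responses = 1125
invited = 37440

raw_rate = responses / invited                     # ~3.0%
adjusted_rate = 0.0835                             # reported adjusted response rate
implied_denominator = responses / adjusted_rate    # ~13,473 deliverable invitations

print(f"raw response rate: {raw_rate:.1%}")
print(f"implied adjusted denominator: {implied_denominator:,.0f}")
```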
Analysis. Once the survey closed, the data was cleaned and
many relationships between responses were
tested for significance in SPSS. In State of Evaluation 2016, we
note which findings are statistically significant
and the significance level. Not all of the findings contained
herein are statistically significant, but we believe
that they have practical value to funders, nonprofits, and
evaluators and so are included in these results.
Representation. We compared organizations that participated in
SOE 2016 with The Urban Institute’s
Nonprofit Sector in Brief 2015. Our random sample had less
representation from organizations with budgets
under $499,999 than the sector as a whole, and more responses
from organizations with budgets of $500,000
and larger. One third (33%) of our sample was from the South
(14,314 organizations), 26% from the West
(11,204), 18% from the Midwest (7,830), and 23% from the
Northeast (9,866).
METHODOLOGY
Responses were mostly received from
executive staff (71% of 2016 survey
respondents), followed by fundraising
and development staff (11%),
administrative staff (4%), program
staff (4%), evaluation staff (3.5%),
Board Members (3%), financial staff
(1%), and other (3%). The top three
sources of funding for responding
organizations were individual donor
charitable contributions (27% of
organizations cited these as a funding
source), government grants and
contracts (26%), and foundation or
philanthropic charitable contributions
(20%).
[Chart: distribution of organization budgets in the SOE 2016 sample compared with the Urban Institute’s sector data, by budget category (less than $100,000; $100,000–$499,999; $1m–$4.9m; and others).]
Innovation Network is a nonprofit evaluation, research, and
consulting firm. We provide
knowledge and expertise to help nonprofits and funders learn
from their work to improve
their results.
ABOUT THE AUTHORS
Johanna Morariu, Director, Innovation Network
Johanna is the Co-Director of Innovation Network and provides
organizational leadership,
project leadership, and evaluation expertise to funder and
nonprofit partners. She has over a
decade of experience in advocacy evaluation, including
advocacy field evaluation, campaign
evaluation, policy advocacy evaluation, organizing and social
movements evaluation, and
leadership development evaluation.
Kat Athanasiades, Senior Associate, Innovation Network
Kat leads and supports many of Innovation Network’s
engagements across the evaluation
spectrum—from evaluation planning to using analysis to advise
organizations on strategy.
She brings skills of evaluation design and implementation,
advocacy evaluation, evaluation
facilitation and training, data collection tool design, qualitative
and quantitative data analysis,
and other technical assistance provision.
Veena Pankaj, Director, Innovation Network
Veena is the Co-Director of Innovation Network and has more
than 15 years of experience
navigating organizations through the evaluation design and
implementation process. She
has an array of experience in the areas of health promotion,
advocacy evaluation and health
equity. Veena leads many of the organization’s consulting
projects and works closely with
foundations and nonprofits to answer questions about program
design, implementation,
and impact.
Deborah Grodzicki PhD, Senior Associate, Innovation Network
Deborah provides project leadership, backstopping, and staff
management in her role as
Senior Associate at Innovation Network. With nearly a decade
of experience in designing,
implementing, and managing evaluations, Deborah brings skills
of evaluation theory and
design, advocacy evaluation, quantitative and qualitative
methodology, and capacity
building.
October 2016
This work is licensed under the Creative Commons BY-NC-SA License. It is attributed to Innovation Network, Inc.
202-728-0727
www.innonet.org
[email protected]
@InnoNet_Eval
Evaluation and Program Planning 58 (2016) 208–215
Original full length article
Barriers and facilitators to evaluation of health policies and
programs:
Policymaker and researcher perspectives
Carmen Huckel Schneider a,b,*, Andrew J. Milat c,d, Gabriel Moore a
a Menzies Centre for Health Policy, University of Sydney,
Level 6 The Hub, Charles Perkins Centre D17, The University
of Sydney NSW 2006, Australia
b The Sax Institute, Level 13, Building 10, 235 Jones Street,
Ultimo NSW 2007, Australia
c New South Wales Ministry of Health, 73 Miller St, North
Sydney NSW 2060, Australia
d Sydney Medical School, University of Sydney, Edward Ford
Building (A27), Fisher Road, University of Sydney NSW 2006,
Australia
ARTICLE INFO
Article history:
Received 9 December 2015
Received in revised form 28 April 2016
Accepted 17 June 2016
Available online 11 July 2016
Keywords:
Policy evaluation
Health policy
Public health administration
Public health
ABSTRACT
Our research sought to identify the barriers and facilitators
experienced by policymakers and evaluation
researchers in the critical early stages of establishing an
evaluation of a policy or program. We sought to
determine the immediate barriers experienced at the point of
initiating or commissioning evaluations
and how these relate to broader system factors previously
identified in the literature.
We undertook 17 semi-structured interviews with a purposive
sample of senior policymakers (n = 9)
and senior evaluation researchers (n = 8) in Australia.
Six themes were consistently raised by participants: political
influence, funding, timeframes, a ‘culture
of evaluation’, caution over anticipated results, and skills of
policy agency staff. Participants also reflected
on the dynamics of policy-researcher relationships including
different motivations, physical and
conceptual separation of the policy and researcher worlds,
intellectual property concerns, and trust.
We found that political and system factors act as macro level
barriers to good evaluation practice that
are manifested as time and funding constraints and contribute to
organisational cultures that can come to
fear evaluation. These factors then fed into meso and micro
level factors. The dynamics of policy-
researcher relationship provide a further challenge to evaluating
government policies and programs.
© 2016 Elsevier Ltd. All rights reserved.
1. Introduction
Previous research suggests that the number of public sector
policies and programs that are evaluated in industrialized
nations
is low (Oxman, Bjorndal, & Becerra-Posada, 2010) and that policymakers, whom we define as those working in the public sector to plan, design, implement and oversee public programming, face significant hurdles in establishing and implementing policy and program evaluation (Evans, Snooks, & Howson, 2013). These challenges sit within what is known as the ‘messy’ business of public policy, including political compromise, collaboration and coordination amongst multiple stakeholders, and strategic and tactical decision-making, all of which have long been recognised in the
evaluation policy literature (Head & Alford, 2015; Parsons,
2002;
House, 2004).
There is however notable potential to utilise evaluation to draw
lessons about public health policies’ and programs’
effectiveness,
109. applicability and efficiency, all of which can contribute to a
better
evidence-base for informed decision making. This is
particularly
the case when implementation of a policy or program is studied
as
experimentation in practice (Petticrew et al., 2005; Craig,
Cooper,
Gunnell, Gunnell, & Gunnell, 2012; Banks, 2009; Palfrey,
Thomas, &
Phillips, 2012; Patton, 1997). There is also a growing
expectation
that evaluation should be used by policymakers to inform public
health policies and programs (Oxman et al., 2010; Chalmers,
2003;
NSW Government, 2013; Patton, 1997) and in theory, use of
evaluation should have synergies with principles of new public
management style public service, such as those widely adopted
in countries and regions such as the United Kingdom, Australia, Scandinavia and North America (Dahler-Larsen, 2009; Johnston, 2000).
There is a wide range of literature on policy and program
evaluation, including considerable debate on appropriate evaluation methodologies (DeGroff & Cargo, 2009; Learmonth, 2000; Rychetnik, 2010; Smith & Petticrew, 2010; White, 2010) and the
challenges of evaluation research uptake (Lomas, 2000; Dobbins
et al., 2009; Chelimsky, 1987; Weiss, 1970). Amongst this
literature
there is awareness that evaluation of public health policies and
programs is complex, highly context dependent, plagued by
ideological debate and subject to ongoing challenges of
researcher–policy collaboration (Flitcroft, Gillespie, Salkeld, Carter, &
Trevena, 2011; Ross, Lavis, & Rodriguez, 2003; Haynes et al.,
2012;
Petticrew, 2013). There are however gaps in the literature about
barriers and facilitators to undertaking quality evaluation at the
level of public service where evaluations tend to be initiated
and
commissioned, namely in middle management and by program
directors. There are also few studies that deliberately seek to
gather, compare and analyse perceived barriers from both policy
makers and evaluation researchers operating within the same
policy space.
In this paper, we aim to explore the early stages of policy and
program evaluation, including the decision to evaluate,
preparing
for evaluation, commissioning evaluations and how
policymakers
and evaluation researchers work together for the purpose of
high
quality evaluation undertaken as part of evaluation research.
Our
primary aim is to:
• Identify barriers and facilitators to evaluation at the initiation
and commissioning stages, from within both the policy making
as well as evaluation research environments.
To achieve this overarching aim, we address the following:
• How evaluations of health policies and programs are planned,
commissioned and undertaken in Australia;
• The barriers and facilitators experienced by policymakers and evaluation researchers in the early stages of establishing an evaluation of a policy or program and in undertaking commissioned evaluation work; and
• Initiatives that may assist government to overcome barriers to
effective evaluation.
The study was designed and undertaken at the Sax Institute, an
independent research and knowledge translation institute
located
in Sydney Australia, to inform the development of programs to
assist and improve evaluation practice in health policy.
Box 1. Interview schedule questions.
Questions for policy makers
1. Could you please describe how policies and programs are evaluated in your agency?
2. What factors influence decisions to undertake evaluations of policies or programs (in your experience)?
3. What factors act as barriers or facilitators to undertaking quality evaluations?
4. What is the best way to engage with evaluators?
5. Now, thinking about commissioning external evaluations, what would help you to commission an evaluation with an external evaluator?
Questions for evaluation researchers
1. From your experience could you please describe how government health policies and programs are evaluated?
2. What is the quality of such evaluations?
3. What are the barriers and facilitators to undertaking good evaluations of health or health-related government policies and programs?
4. What is the best way for policy makers to engage researchers/evaluators to evaluate government policies and programs?
5. What can policy makers do to effectively commission an evaluation?
2. Methods
We undertook 17 semi-structured interviews with a purposive
sample of policymakers (n = 9) and evaluation researchers (n =
8) in
Australia. Policymakers had a minimum of 10 years of
experience
in a health policy or program delivery role in either state or
federal
government. They were also currently employed in a health
related
government department and had previously been involved with
the evaluation of government policies and programs. Evaluation
researchers had a minimum of eight years of experience in
undertaking evaluations of health or health related policies or
programs run by state or federal level government agencies in
Australia and were senior academics at a university.
Separate but similar interview schedules were developed for
policymakers and evaluation researchers. The interviews comprised five open-ended questions (see Box 1). The study was
approved by the Humanities Human Research Ethics Committee,
University of Sydney, Australia. Informed written consent was
obtained from all participants.
Policymakers were asked to reflect on their experiences of how
public health policies and programs are evaluated, decisions
about
whether or not to evaluate them, ways in which policymakers
engage or work with evaluation researchers and what would
help
them to commission an evaluation with an external evaluator.
Evaluation researchers were asked about their experiences of
how government health policies and programs are evaluated in
Australia, working with government agencies and asked about
what would facilitate the process of undertaking evaluations of
public policies and programs.
Sixteen interviews were voice-recorded and transcribed and
one was documented with detailed notes. We used thematic
analysis to draw conclusions from the texts.
We used thematic analysis as a realist method—assuming that
by-and-large the responses of the interviewees to questions
posed
reflect the experiences, meaning and reality of the participants.
Coding and identification of themes broadly followed the
method
of Braun and Clarke (2006) and proceeded in four steps.
First, texts were de-identified and labelled with markers to categorise policymakers (PM1–9) or evaluation researchers (ER1–8). Second, author CHS inductively coded eight texts (four from
each category) using a free coding process. Codes were collated
and
merged within parent codes where possible. Authors CHS and
114. GM
then grouped parent codes into categories and compiled them in
a
proto-coding manual. Third, nine transcripts were distributed
amongst all authors (CHS, AM, GM) to test-code using the proto-coding manual. Where necessary, codes were supplemented and/or merged and re-categorised. The coding manual was finalised comprising five main categories, each with several sub-categories
of codes.
1) Planning and commissioning evaluations of policies and
programs
2) Relationships between policy makers and evaluation researchers
3) Conducting evaluations of policies and programs
4) Utility of evaluation
5) Current evaluation practice
Fourth, author CHS re-coded all texts using the coding manual.
All authors then reviewed each code to interpret meaning and
emerging themes.
3. Results
Evaluations of government programs in Australia are generally
initiated and commissioned at the level of the program or policy to be evaluated, though this may occur within higher level evaluation frameworks. Program managers or directors of a programmatic unit may take direct responsibility for setting initial scope requirements of an evaluation and establishing a committee structure to oversee competitive or selective tendering to an external evaluator. In some government agencies there
may be specific programmatic units that perform internal
evaluations or facilitate the process of commissioning
evaluations
and establishing partnerships with research institutions.
Program
level managers and staff usually take on the role of facilitating
access to data, site visits and other necessary permissions for
evaluation work to be undertaken. Models of engagement
between evaluation researchers and policy agencies vary from
case to case. Respondents in our interviews described a range of
‘contracting out’ models, such as competitive and selective
tender, pre-qualification/panel schemes as well as ‘collaborative
engagement’ models such as co-production and researcher
initiated evaluation. A discussion of these types of models of
engagement is reported on elsewhere [Reference removed in
blinded manuscript].
Here, we discuss results of the interviews that shed light on our
main aim: identifying barriers and facilitators to evaluation
from
within both the policy making as well as evaluation research
environments. A broad array of themes emerged from the
interviews that related to experiences that facilitated or acted as
a barrier to evaluation. Of these, six were consistently raised as
particularly important by a high number of study participants:
timeframes (15 participants), political influence (12
participants),
funding (10 participants), having a ‘culture of evaluation’
(9 participants), caution over anticipated evaluation outcomes
(8 participants) and skills and ability of policy agency staff
(8 participants).
3.1. Timeframes
Evaluation timeframes were raised by more study participants than any other issue. Timeframes acted as a barrier or facilitator in
three ways:
First, evaluations are often initiated late in the policy or
program implementation process. This can have several consequences: funding to undertake evaluation can simply run out; or
the situation arises that no adequate data has been collected at
an
early stage of implementation that can act as a baseline. There
may
also simply be inadequate time to undertake data collection or
conduct analysis for rigorous evaluation. Many interviewees
found
that the consequence of starting late was to undertake small
evaluations limited in scope, often evaluating processes rather
than outcomes. One interviewee reported limiting the evaluation of a complex intervention on childhood well-being to a review of relationships between stakeholders in the process.
“One of the things that makes it difficult is when something has
been implemented and then ‘oh, let’s do an evaluation’. So you
don’t have baseline data, it hasn’t been set up to exclude all
sorts of
other factors that may have brought about change” (ER7)
Second, timeframes are often too short to conduct an
evaluation. The vast majority of public health policies and
programs will require years to have a measurable impact on
health outcomes. For example, one interviewee had been
involved
in evaluation reporting for obesity prevention programs that had
to
focus on the logic of assumptions to conduct the evaluation.
“But timeframe is another problem I think. Often, the process as
a
new program gets rolled out and it’s expected to have massive
impact somehow within a 12 month period and it’s just not a
realistic timeframe to be able to demonstrate change” (ER3)
The time required to design a complex evaluation and
undertake complex analysis is also beyond what is feasible in
many of the short timeframes of evaluations.
“I mean if you want to do a proper evaluation you can’t have a
turnaround time of one month or something like six weeks. You
know if you’re going to do proper focus groups or you know,
design
questionnaires . . . and I notice a lot of times they want this
quick
turnaround time” (PM1)
Timeframes also limit who might be engaged to undertake evaluations. Academic researchers are less likely
to be
able to dedicate time to an evaluation that needs to be done
quickly, especially if requiring the recruitment of staff.
“But one of the barriers for academic groups being part of it is
the
timeframes. So you know you’ll normally sort of have to put in
your
tender bid very quickly and then you normally have to start
work
on it instantaneously, and academic groups are not always able
to
be quite so nimble as that” (ER5)
Third, policymakers reported general lack of time between their
everyday tasks to dedicate to careful consideration of evaluation
planning, including program objectives and evaluation
questions.
“But because of the kind of immediacy of the job, and even if
the
intellectual capacity is there to really engage with the process
there
often isn’t the time. You know the writing of briefs, and
advising
people, and doing this and that. I know it is about getting the
right
balance . . . but it limits the time available to sit down and
really
nut out your questions” (PM4)
3.2. Political influence
Many participants had experienced political imperatives as a major barrier to conducting evaluations, often citing them as a more significant issue than funding, timing or the knowledge and skills of agency staff.
“I think we tend to complain that there aren’t resources and that
there isn’t enough people around with the ability to do it. To me
those are important but they’re much less significant than the
political cloud that surrounds the whole business” (ER1).
Political imperatives included those situations where political
preferences influenced whether an evaluation goes ahead;
pressure from higher up to have a policy or program succeed or
not; and any decisions based on the sensitivity of a policy or
program. Certain issue areas were more likely to be subject to
ideological debate and therefore political pressure than others;
as
was the case for certain approaches. In public health for
example,
debate between community vs individual responsibility as a
basic
paradigm that underlies policy decisions can make one issue
area
or policy response more politically sensitive than another. In
some
cases public health programs and policies can proceed at arm’s length to such debates, in other cases they can be heavily
influenced or weighed down by political imperatives and
uncertainty (Head & Alford, 2015).
Political influence was seen to be stronger at more senior levels
of government agencies. Election cycles reduced timeframes for
action and subsequently evaluation and reporting. The time
required for a policy or program to actually show an effect may thus lie outside the available timeframe.
“Policymakers obviously work with a lot of constraints and
political constraints. Even when it is personally recognised that
they really need is five years to see if something is making a
difference they are working in an environment that doesn’t
allow
for that” (ER3)
Political ideology and experiences of highest level public
servants and elected officials were seen to trickle down into
government agencies determining what programs are prioritised
or placed under scrutiny.
“I wouldn’t do this—but it’s possible that if something’s
coming
under attack and . . . or is contentious and either the government
of the minister or the department thinks that it’s a valuable
program that they’ll construct an evaluation or research around
it
in such a way that it will deliver the answers they’re looking
for”
(PM7)
Political imperatives not only influenced if, when and with
what funding a policy or program might be evaluated, but
indirectly determined the focus of evaluations. For example,
there
may be a preference for measuring cost-utility rather than
population health outcomes. In some situations, it may be more
important to be immediately seen to be ‘doing something’ rather
than having an actual effect.
“If it is a politically motivated process, then the aim is to be
recognised for doing the program or having the policy. The
actual
‘Was it really effective?’ . . . I don’t think those questions are
always required or wanted” (ER2)
Both evaluation researchers and policymakers recognised the frustration that can result when evaluation results and recommendations formulated from them are not acted upon.
3.3. Funding
Inadequate funding for evaluation was generally seen to
compromise the quality of evaluations. It was also linked to
evaluations that were initiated late in the implementation phase
and sometimes after funding had run out.
“So barriers—always money . . . We used to occasionally find
out
some of the more complex analyses, and if you don’t have
money to
do that then it obviously stymies your ability to do some of the
more complex stuff” (PM6)
Interviewees did not, however, experience a lack of funding for policies and programs per se; rather, too little funding was allocated specifically for evaluation in the policy and program design phase.
“They had money for the implementation of it but no money had
been set aside for the evaluation for example. It was how the
money
was packaged” (ER5)
Having evaluation written into the program design from the beginning was seen to increase the likelihood that adequate funding for the evaluation would be available.
“If it’s written in that the program’s going to have an
evaluation,
then that would help, and if there’s resources identified to do it,
both money and people—and skills so the whole set” (PM9)
Lack of funding also influenced whether an evaluation would be
undertaken with external evaluators or undertaken in-house. In
situations where agency resources are scarce and there were
fewer
opportunities for staff to develop skills, the quality of
evaluations
was seen to be compromised, or at least limited in
methodological
scope.
3.4. Culture of evaluation
A culture of evaluation within a government agency refers to a value being placed on evaluation in all decision-making processes, such as priority setting, program design and implementation, and funding (Dobbins et al., 2009). Both policymakers and evaluation researchers pointed to the extent to which such values were present within a government agency as a key facilitator of good evaluation practice. However, it was an area commonly
identified
for potential improvement.
“ . . . so I really think it just depends on whether people are
focused
on doing them or not” (ER1)
A culture of evaluation was seen to require a high degree of
motivation to evaluate policies and programs, and opportunities
to
share experiences, positive and negative, with colleagues. It was
also linked with having high level championing of evaluation
and a
staff mix with skills, experience and motivation to integrate
evaluation into standard practice.
“I mean it’s probably something to do with people that are in
decision-making roles I expect – how much of an emphasis they
place on evaluation and then how much they know about it”
(PM9)
A culture of evaluation was seen to have the potential to
counteract some of the nervousness surrounding evaluation. A
poor evaluation culture was characterised by a lack of support
for
government agency staff who were confused about how to
conduct
an evaluation.
3.5. Caution over anticipated outcomes
Several policymakers and evaluation researchers recalled
experiences where fear or worry of possible negative evaluation
results (i.e., results showing that a program failed to have an
effect,
or had made a situation worse) acted as a barrier to evaluation.
This
fear appeared to act as a more significant barrier for large scale
programs or policies.
“not actually wanting to deal with the unpalatable truth then
obviously that’s a major barrier” (ER2)
In many cases, evaluations are undertaken specifically to
demonstrate positive results and participants reported that briefs
for external evaluators were written in a way that implies a
certain
result is expected.
“Of course there are some agendas behind evaluations, where
the
aim is to snow success rather than learn” (PM2)
This can influence the eventual mode of engagement with external researchers; private consulting firms were often seen as more likely to report positively on policies and programs, which carries less risk of an unwelcome evaluation outcome. It also led to situations where there was an understanding that evaluation results would not be published, and in such cases there was a concern that the quality of evaluations
was generally lower due to the absence of expectations of
public
scrutiny.
“Within government, and also most organisations, people would
like to see very positive stories coming out, and tend to be quite
wary of independent groups taking a look at what they are
doing.
Because you’ll never know what they’ll say” (ER7)
Interviewees postulated that the underlying cause of this
degree of caution was a need to demonstrate positive outcomes