This document proposes a common outcome framework to help nonprofits measure performance more efficiently. It describes the development of 14 program area profiles with outcomes, indicators, and sequence charts, which informed a draft common framework of core outcomes across program areas. Further testing and expansion are needed to refine the framework and apply it more broadly. The goal is a standardized yet flexible approach that helps nonprofits and funders assess impact in a way that is both consistent and relevant.
The Transforming Health Systems (THS) initiative was one of The Rockefeller Foundation’s largest global health initiatives. Aligned with the Foundation’s mission to promote the well-being of humanity, THS aimed to improve the health status and financial resilience of poor and otherwise vulnerable populations through activities promoting improved health systems performance and the expansion of universal health coverage (UHC).
This report synthesizes findings from a five-year, multicomponent evaluation of the THS initiative. The objectives of the evaluation were to assess i) the effectiveness of the three core strategies – global advocacy, regional networks, and country-level investments – employed under THS to advance progress toward UHC in low- and middle-income countries in four focus countries, ii) the overall effectiveness and influence of the initiative, and iii) the Foundation’s legacy in the UHC arena. A key component of the evaluation was to document lessons learned from achievements and challenges to inform the development of future initiatives at the Foundation.
Overall, the evaluation found the THS initiative to be successful in its efforts to activate a global movement to accelerate progress toward UHC. The Foundation catalyzed and shaped the global UHC movement and, ultimately, influenced the inclusion of UHC in the Sustainable Development Goals (SDGs) of the post-2015 agenda. It also created enduring cross-learning platforms and tools to support country progress toward the SDGs’ UHC targets. Although THS gained less traction in advancing UHC through its focus country investments, its success in making UHC a global development target and creating networks and coalitions to support UHC reform efforts in LMICs will likely have country-level impacts for years to come.
Putting “Impact” at the Center of Impact Investing: A Case Study of How Green... (The Rockefeller Foundation)
More than ever before, investors are looking to put their money where their values are. As a result, impact investing has burgeoned into an over $100 billion industry in just over ten years. But how do impact investors know whether their money is truly having a positive impact on people and the planet? How can these investors better manage their results, and use material data – both positive and negative – about social and environmental performance to maximize their impact?
This case study documents the journey of one organization, Green Canopy Homes – and its financing arm, Green Canopy Capital – toward more systematically thinking about, measuring, and managing its impact. While developing the impact thesis for its resource-efficient homes, Green Canopy applied a theory of change tool, an approach common within the social sector, to systematically map the causal pathways between its strategies and intended impact. Its rationale for adopting this approach was simple: use it to maximize impact, and understand and minimize possible harm. The tool also effectively positioned Green Canopy to measure and communicate about its social and environmental performance, and to make client-centric adaptations to its business.
The case study provides an illuminating example of how investors can adapt theory of change to serve their impact management needs. By demonstrating the relevance and transferability of this tool for articulating, measuring, and managing impact, the hope is that this case study can contribute to strengthening other investors’ approaches, in turn contributing to building the evidence base for the “impact” of impact investments.
This guide is designed for program officers to use in their work related to networks, coalitions, and other relationship-based structures as part of their initiatives, program strategies, and outcomes. It offers a set of core components that make up the basics of strategizing, implementing, and sustaining inter-organizational relationships and structures. You can work through the guide from beginning to end or jump to specific issues with which you might be struggling. Every component suggests concrete “actions” or questions that a program officer can apply.
Poster presented at the Crop Science Society of the Philippines, Inc.'s 42nd Scientific Conference from April 16 to 20, 2012 in Puerto Princesa City, Palawan, Philippines
Organisations are increasingly realising the power of networks to create the greatest impact for society. Working collaboratively with a network of partners can increase your reach, generate efficiencies and stimulate innovation.
Yet, approaches to working in networks vary widely and each approach has a unique set of associated challenges. In our latest Briefing Paper, Aleron brings together the insight of expert practitioners in the field to bring clarity to the complex area of network working in the social sector.
Entrepreneurial Adaptation and Social Networks: Evidence from a Randomized Ex... (Greg Bybee)
Working paper by Charles (Chuck) Eesley (Stanford University) and Lynn Wu (University of Pennsylvania), based on research on NovoEd's experiential learning platform.
We examine the performance of early-stage entrepreneurs before and after randomly showing them different approaches to the strategic process. In isolation, a planning strategic process is more effective than a more adaptive approach. Contrary to the finding that entrepreneurs often change their business model and strategic direction frequently, we find that instructing entrepreneurs to have a strong, persistent vision for their startup often results in better performance in the early stages. In particular, the results show that adding a diverse network tie alone is less effective than combining a diverse tie with a specific strategic approach. In contrast to prior work that shows that entrepreneurs often begin their ventures with a cohesive, closed network high in trust then transition later to a more diverse network, we find that early stage ventures appear to be better off with more diverse social ties in the beginning, particularly if a more adaptive approach is adopted for the venture’s strategy. The results suggest that when formulating a strategic approach there is an important interaction between social networks and the strategy formulation process that must be taken into consideration.
4th Wheel aims to support implementation staff in executing social projects, conceptualizing program design, developing outreach and marketing plans, forming partnerships, engaging employees based on core competencies, and assessing organisational impact. We currently offer training programs on program design, implementation, and impact evaluation methodologies and techniques at all organisational levels, with a special focus on field staff, whose role is crucial: they engage regularly with beneficiaries and report to management.
We offer a broad range of consultancy and training services to ensure that stakeholders are equipped to conceptualize and implement social programs that are impactful, measurable and sustainable.
Performance Assessment of Agricultural Research Organisation Priority Setting... (iosrjce)
IOSR Journal of Business and Management (IOSR-JBM) is a double-blind peer-reviewed international journal that provides rapid publication (within a month) of articles in all areas of business and management and their applications. The journal welcomes high-quality papers on theoretical developments and practical applications in business and management. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
CHAPTER SIXTEEN
Understanding Context: Evaluation and Measurement in Not-for-Profit Sectors
Dale C. Brandenburg
Many individuals associated with community agencies, health care, public workforce development, and similar not-for-profit organizations view program evaluation as akin to a visit to the dentist’s office: painful, but at some point unavoidable. A major reason for this perspective is that evaluation is seen as taking money away from program activities that do good for others, that is, intruding on valuable resources intended for delivering the “real” services of the organization (Kopczynski & Pritchard, 2004). The underlying logic is that since limited funds are available to serve the public good, why should a portion of program delivery be allocated to something other than serving people in need? This is not an unreasonable point, and one that program managers in not-for-profits face on a continuing basis.
The focus of evaluation in not-for-profit organizations has shifted in recent years from administrative data to outcome measurement, impact evaluation, and sustainability (Aspen Institute, 2000), that is, from the short-term to the long-term effects of interventions. Evaluators in the not-for-profit sector view their world as a combination of technical knowledge, communication skills, and political savvy that can make or break the utility and value of the program under consideration. Evaluation in not-for-profit settings tends to value teamwork, collaboration, and generally working together. This chapter provides a glimpse at a small portion of the evaluation efforts that take place in the not-for-profit sector. It excludes, for example, efforts in public education, but does provide some context for workforce development efforts.
CONTRAST OF CONTEXTS
Evaluation in not-for-profit settings tends to have different criteria for the judgment of its worth than is typically found in corporate and similar settings. Such criteria are likely to include the following:
How useful is the evaluation?
Is the evaluation feasible and practical?
Does the evaluation hold high ethical principles?
Does the evaluation measure the right things, and is it accurate?
Using criteria such as these seems a far cry from the concepts of return on investment that are of vital importance in the profit sector. Even the question of transfer of training can sometimes be secondary to assuring that the program is described accurately. Another difference is the pressure of time. Programs offered by not-for-profit organizations, such as an alcohol recovery program, take a long time to show effects, and by the time results are viewable, the organization has moved on to the next program. Instead we often see evaluation relegated to measuring the countable, the number of people who have completed the program, rather than the life-changing impact that decreased alcohol abuse has on ...
This report provides a look at those organizations that recognize and benefit from what can be achieved when social technologies are paired with the new ways of working they enable. That paired approach delivers shared value, generating complex business outcomes for the organization while making the employee work experience easier and more fulfilling.
IMPACT CONVERSATIONS FOR FOUNDATIONS: LEAN IMPACT METHODOLOGY (SoPact)
Nonprofits and other grantee program organizations have a hard-fought impact on the world, and behind that effort, their work is fueled by funders. But if someone were to ask your foundation, “what is your impact?”, would your answer go beyond stories from the field and general output numbers?
Lean Impact Measurement is a data-driven process for impact measurement and assessment developed by SoPact, social impact consultants, and a research team from the University of Melbourne. This four-step guide combines technological expertise tailored to the unique needs of the social sector with social impact expertise. From offline mobile data collection to metrics selection, Lean Impact Measurement’s framework is intended to be actionable and realistic to implement. The methodology is inspired by the lean startup movement and supported by technology that helps streamline the process in a short time.
So what’s your impact? Drive the crucial grant impact conversation.
Quality Tool Analysis
Pharmacy organizations face a supply chain management problem: not all individuals have access to the highest-quality medication, and maintaining service levels in the supply of medicine requires developing the new knowledge, skills, and systems that strengthen medicine supply chain management. The quality tool used to recognize these issues is the data collection sheet, which gathers the essential information needed to answer any questions that may arise. The point of the data is that its purpose is apparent, that it reflects reality, and that it is easy to collect and use. A quantitative method was used to gather the data. The data collection sheet is used for recording the distribution of attributes of the articles produced, classifying defective items, locating defects in parts, identifying causes of defects, and supporting verification checks (Awad, 2012).
To arrive at the problem, a questionnaire was used together with the data collection sheet. There was a clear framework for how the data would be gathered, what kind of record would be produced, and how the gathered data should be used. How the data would be analyzed was also outlined, and the person responsible for overseeing data collection was identified. To optimize data collection, Sharp and McDermott (2009) recommend that it be carried out by an experienced auditor, on a random sample of the activities, the people, and the groups in the areas to be observed. The organization engaged an experienced auditor to gather and examine the data, since an experienced auditor is more likely to produce accurate data.
Stakeholder Analysis
Commonly, as an expert, one tends to think about the what before the who when facing an undertaking, emphasizing the deliverables instead of the people. The first process of the communication knowledge area, to be executed at project initiation, is to identify the individuals with an interest in it.
One method for this is stakeholder analysis, a procedure of systematically collecting and analyzing quantitative and qualitative data in order to determine which particular interests must be considered throughout the venture. It makes it possible to identify the interests, expectations, and influence of the interested individuals and to relate them to the purpose of the undertaking.
A thinking tool to facilitate understanding and decisions on intent, baseline, gaps, actions, and resources when strategically planning e-services in government.
how good quality qualitative data analysis (QDA) can help you identify impacts of your programs to better meet your objectives and the needs of the community
the steps involved in undertaking basic QDA, including repeated reading, analysis and interpretation
the value of involving others in the QDA process
the difference between description and interpretation
the value of seeking feedback on your analysis and using triangulation to increase the trustworthiness of findings
Data-driven culture in startups (2013 report) (Geckoboard)
Results of a global survey of 368 startups, carried out by Geckoboard and Econsultancy, into how startup organisations use data to drive their business and establish a data culture.
The implementation 'black box' and evaluation as a driver for change. Presentation by Katie Burke and Claire Hickey of the Centre for Effective Services.
Urban Institute & CWW- Nonprofit Performance Indicators
Building a Common Outcome Framework
to Measure Nonprofit Performance

December 2006

The Center for What Works
3074 West Palmer Boulevard
Chicago, Illinois 60647
(773) 398-8858

The Urban Institute
2100 M Street, NW
Washington, D.C. 20037
(202) 833-7200
Preface
It seems clear that nonprofit organizations need to regularly collect information on the
effect of their services, if they want to continue to attract funds from foundations, government,
and individual donors. And even more important, these data can help them manage their
resources to maximize the services they provide and continuously improve their offerings.
With funders’ increasing pressure to set up measurement systems, the worst-case
scenario has sometimes emerged: nonprofits with multiple projects and multiple funders have to deal
with different requirements for tracking outcomes for similar programs. If agreement on a
common core set of outcome indicators can be reached, then outcome reporting can be efficient
and focused. Even more important, successful practices could be identified across similar
programs and organizations and then shared so that outcomes could be improved.
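The efficiency argument above can be made concrete: once two organizations report the same core indicator, defined the same way, their results can be pooled or compared directly. The following sketch illustrates this with a single shared indicator; the organization names and figures are invented for illustration, not data from the report:

```python
# Hypothetical reports from two organizations using one shared core
# indicator: participants completing the program, out of those enrolled.
reports = [
    {"org": "Agency A", "enrolled": 120, "completed": 90},
    {"org": "Agency B", "enrolled": 80, "completed": 68},
]

# Because the indicator is defined identically, aggregation is a simple sum...
total_enrolled = sum(r["enrolled"] for r in reports)
total_completed = sum(r["completed"] for r in reports)
print(f"Sector-wide completion rate: {total_completed / total_enrolled:.1%}")

# ...and comparison across organizations is meaningful, which is what lets
# successful practices be identified and shared.
for r in reports:
    print(f'{r["org"]}: {r["completed"] / r["enrolled"]:.1%}')
```

Without the shared definition, the sums and comparisons above would be meaningless, which is the reporting burden the preface describes.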
The work described in this report first provides suggested core indicators for 14
categories of nonprofit organizations and then expands the notion of common core indicators to a
much wider variety of programs by suggesting a common framework of outcome indicators for
all nonprofit programs. This can provide guidance to nonprofits as they figure out what to
measure and how to do it, and will help ease the looming reporting nightmare that will occur
unless a common framework for outcome measurement emerges. Additional research is needed to
test and revise the existing core indicators for the selected programs, add core indicators
for more program areas, and expand and revise the common framework for more general
guidance.
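The layered structure described above, with program-area-specific core indicators nested under outcomes, can be sketched as a simple data model. This is an illustration only; the program area, outcome, and indicator names below are hypothetical and are not taken from the report’s actual taxonomy:

```python
from dataclasses import dataclass, field

@dataclass
class Indicator:
    """A measurable signal of progress toward an outcome."""
    name: str
    unit: str  # e.g. "percent", "count"

@dataclass
class Outcome:
    """A change the program seeks, with the indicators that track it."""
    name: str
    indicators: list[Indicator] = field(default_factory=list)

@dataclass
class ProgramArea:
    """One of the profiled nonprofit program areas."""
    name: str
    outcomes: list[Outcome] = field(default_factory=list)

# Hypothetical example: an adult-education program area.
adult_ed = ProgramArea(
    name="Adult Education",
    outcomes=[
        Outcome(
            name="Improved literacy",
            indicators=[
                Indicator("Participants advancing one reading level", "percent"),
                Indicator("Participants completing the course", "count"),
            ],
        )
    ],
)

# Walk the framework to list every indicator under a program area.
for outcome in adult_ed.outcomes:
    for ind in outcome.indicators:
        print(f"{adult_ed.name} / {outcome.name}: {ind.name} ({ind.unit})")
```

A sector-wide framework would hold one such `ProgramArea` per category, with the shared core indicators reused across areas where they apply.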
We hope the initial material presented here will be helpful and act as a catalyst for further
work in this crucial area.
Elizabeth T. Boris, Director
Center for Nonprofits and Philanthropy
The Urban Institute
Building a Common Outcome Framework
To Measure Nonprofit Performance
Introduction
For most stakeholders in the nonprofit sector, measuring performance is elusive. Nonprofit
managers and staff, funders, board members, potential clients, and members of the public seeking
information are often frustrated by lengthy academic evaluations and complex, meaningless
statistical analysis. At the same time, there is increasing pressure on nonprofits to account for and
improve results. Although classic program evaluation is one response, practitioners and funders also
need the tools, capacity, and standards to track and measure their own performance.
With little actual information, practitioners base decisions primarily on narrative annual
reports, anecdotes, related social science research and journal articles, IRS Forms 990, and
administrative metrics (such as the percentage of budget spent on administration or fundraising).
Often, information from these sources is not timely, offers little analytical or predictive value, and is
hard to aggregate or synthesize to help improve services. It is, therefore, of limited value to the staff
members actually delivering services.
While the concept of measuring performance is not new,1 the development of practical ways to
implement actual measures is. Progress in understanding how to think about performance has
been made. For example, there are many handbooks on outcome measurement, logic models,
rating services, and assessment tools, but how much performance data have actually been
collected and used? Citing the diversity of nonprofit work, some scholars have even concluded that
systematically measuring impact in the nonprofit sector is impossible.2 A convergence of forces,
however, including increased government oversight, the call for greater accountability from various
stakeholders, more professional nonprofit management, and competition for funding, is accelerating
the need to overcome barriers to measurement. In addition, advances in computer technology now
permit performance data to be collected and processed more easily.3
Some of the impetus for enhancing accountability for nonprofits and their performance
comes as a response to recommendations to the Senate Finance Committee by the Panel on the
Nonprofit Sector (established by Independent Sector) in May 2005. The Panel recommended
that, as a best practice, charitable organizations establish procedures for measuring and
evaluating their program accomplishments based on specific goals and objectives. In addition,
the Panel recommended a sector-wide effort to provide information and training focused on
appropriate methods for program evaluation.
1 See, for example, Measuring the Impact of the Nonprofit Sector, Patrice Flynn and Virginia Hodgkinson, eds., Kluwer Academic Press, 2001; "Measuring Program Outcomes: A Practical Approach," United Way of America, 1996; and "Why Measure Performance?" by Robert D. Behn, Harvard University, Kennedy School of Government, January 22, 2002.
2 Paul DiMaggio, "Measuring the Impact of the Nonprofit Sector on Society Is Probably Impossible but Possibly Useful," in Measuring the Impact of the Nonprofit Sector.
3 "Performance Measurement: Getting Results," interview with Harry Hatry, the Urban Institute, http://www.urban.org/pubs/pm/author.html
While it appears unlikely that there will be detailed federal legislation that calls for
performance reporting, stakeholders are paying attention to assessing effectiveness and the need
for improved measurement and tracking of nonprofit outcomes. Having a standard framework
for developing outcomes and indicators can help create important tools for the sector to better
communicate the value of its services.
About the Common Outcome Framework Project
The Urban Institute and its project partner, The Center for What Works, collaborated
from June 2004 through May 2006 to identify a set of common outcomes and outcome indicators,
or "common framework," for measuring nonprofit performance.4 The work began with the
recognition that nonprofit organizations often have limited capacity or resources for
collecting, analyzing, and using data to inform practice, even as funders increasingly
demand such practice. This project has attempted to identify a more standardized approach
for nonprofits themselves, as well as for the organizations that choose to fund their efforts.
To meet this need, the research team selected and then examined 14 separate program
areas, looking at their missions, the outcomes they sought, and potential outcome indicators
for tracking progress toward those missions. The programs of nonprofit organizations
almost always have multiple outcomes and require a number of outcome indicators—both those
that measure “intermediate” (usually early) outcomes and those that measure “end” outcomes.
The team developed sample “outcome sequence charts” for each of the 14 programs to portray
the sequence of these outcomes.
The 14 programs included in this project represent only a small proportion of the great
variety of programs that exist. Therefore, as a final task, we developed a common framework for
outcomes, one that might provide other programs with a starting point for identifying outcomes
and outcome indicators for themselves.
We hope that this guidance can help nonprofit organizations reduce the time and cost
of implementing an outcome measurement process and improve its quality.
With improved and more consistent reporting from grantees, funders, too, would be
better able to assess and compare the results of their grants.
An outcome sequence chart for the project is shown as Exhibit 1.
4 Project support was provided primarily by the Hewlett Foundation, with additional support from the Cisco and Kellogg Foundations.
Outcome Sequence Chart: Creating a Common Framework for Measuring Performance

Identify, classify, and disseminate relevant, useful performance indicators for nonprofit work; identify common program and organizational indicators across program areas
→ Better performance indicators and outcome data
→ Better benchmarks and comparisons across programs and organizations; better data on what works
→ Smarter decisions about the allocation of resources; stronger management and improved program design through identification of effective practices; better organizational efficiency; better resources for grant makers; more effective and impactful programs
→ Organizations fulfill their missions; clients are better off
About This Report
This project description has been prepared so that the current results can be used as a
resource for nonprofit organizations and their funders. Although the materials presented are not
complete—without the necessary next steps of testing, refining, and expanding to more program
areas—we feel they can offer guidance and help nonprofits and grantmaking organizations in
developing their outcome measurement programs.
The information is presented in four parts:
Part 1: Project Approach
Part 2: Candidate Outcomes, Outcome Indicators, and Outcome Sequence charts for
Specific Programs
Part 3: Draft Common Outcome Framework
Part 4: Tips on Using the Common Framework Project Materials
Part 1
Project Approach
While there is no shortage of outcomes and their indicators in some program areas, there is
no centralized grouping of them or assessment of their quality that could serve as a resource for
organizations that wish to develop outcome measurement systems. And because of the vast
range of programs in the social sector, major gaps exist in the outcome indicators that have been
developed. This project took a first step toward providing a resource for quality indicators
and guidance for nonprofits on developing good indicators where indicators for their
specific programs are not yet available.
First, we chose a number of specific program areas and identified program outcomes and
indicators already in use and/or recommended. Outcomes are defined as the results of a program
or service that are of direct interest and concern to customers of the program. Outcomes are
distinguished from program outputs, which, while important to the program, are primarily of
internal use and not of direct concern to customers (such as the number of training sessions
provided to staff).
It is often difficult to measure outcomes directly, so many indicators are proxies. For
example, while tracking the avoidance of a certain kind of behavior can be difficult, a client
can be tested on his or her knowledge of why that behavior should be avoided. However, the
evidence that increased knowledge leads to the desired change in behavior must be strong
before increased knowledge is deemed a "good" indicator of the desired change in behavior.
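To illustrate the point (this is a sketch, not part of the project's method), the strength of such a proxy can be checked against historical data by correlating knowledge gains with observed behavior change; the data and the acceptance threshold below are invented for the example.

```python
# Sketch: vetting a proxy indicator (pure Python, hypothetical data).
# A knowledge-test gain is only a "good" proxy for behavior change if the
# two measures are strongly associated across past participants.

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length numeric lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def proxy_is_acceptable(knowledge_gains, behavior_changes, threshold=0.7):
    """Accept the proxy only when the association is strong.

    The 0.7 threshold is illustrative; a program would set its own standard.
    """
    return pearson_r(knowledge_gains, behavior_changes) >= threshold
```

A program with paired before/after records could run this check once a year to decide whether the knowledge test is still worth treating as evidence of behavior change.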
Information was collected from a wide range of sources, including national nonprofit umbrella
groups in the US, national accreditation agencies in specific fields, and national nonprofits
with local affiliates. Outcomes and outcome indicators were assessed as to which were
useful, relevant, and feasible. It is also important to consider outcome information that is
not currently being collected but should be. We found a highly useful basis for developing
quality indicators to be an outcome sequence chart (based on the logic model format) for the
program: the sequence of a program's outputs, intermediate (earlier) outcomes, and the
ultimate desired end outcomes. Once the desired outcomes are identified, with the help of the
outcome sequence charts, appropriate outcome indicators for measuring progress toward those
outcomes can then be identified.
Basic criteria for quality indicators included ones that were: specific (unique, unambiguous);
observable (practical, cost-effective to collect, measurable); understandable (comprehensible);
relevant (measured important dimensions, appropriate, related to the program, of significance,
predictive, timely); time bound (covered a specified period of time); and valid (provided
reliable, accurate, unbiased, consistent, and verifiable data).
The characteristics of a successful taxonomy or common framework were also reviewed.
The most useful tend to reflect the manner in which the sector organizes, collects, and reports
the information. Although essential principles of comprehensiveness, mutual exclusivity of
elements, and logical consistency must be followed, there must be a grounding in what is
actually in use by practitioners and what has worked for the specific program areas. Thus,
testing by stakeholders (including nonprofit staff; funders, both public and private; clients,
participants, and service users; and even the public, where appropriate) is vital.
Outcomes and indicators were collected for fourteen different program areas to help inform
the development of the common framework. Lists of quality outcomes and their indicators were
selected for program areas ranging from emergency shelter to youth mentoring to health risk
reduction programs. The 14 program areas are listed in Part 2. The outcomes identified for
these 14 programs were then reviewed for common elements, which then became the basis of the
draft of the common framework, described in Part 3.
The project efforts and products to date represent the completion of our first phase of work.
More work is highly desirable to further refine, test, and expand the outcomes framework to
increase its relevance and maximize its potential utility for the sector. The following tasks
represent next steps:
Development of an interactive website tool, including references to sample data
collection instruments and protocols, “build-your-own” outcome sequence charts, etc.
Linking the outcome indicators to actual existing data collection instruments, such as
questionnaires or interview protocols, would considerably increase the helpfulness of this
material.
Expanding the number of program areas for which candidate outcomes, outcome
indicators, and outcome sequence charts are available on the Internet.
Refinement of the outcomes framework by adding common indicators.
Developing program performance outcomes and indicators for internal organizational
strategy, including: management effectiveness, financial sustainability, and community
engagement.
Part 2
Candidate Outcomes, Outcome Indicators, and Outcome Sequence
Charts for Specific Programs
The 14 nonprofit program areas selected for detailed analysis emphasized health and human
services but also included some programs that extend beyond the typical client-centered services
to broader community outcomes and interests. They include the following:
Adult Education and Family Literacy
Advocacy
Affordable Housing
Assisted Living
Business Assistance
Community Organizing
Emergency Shelter
Employment Training
Health Risk Reduction
Performing Arts
Prisoner Re-entry
Transitional Housing
Youth Mentoring
Youth Tutoring
For each, we developed a program description, an outcome sequence chart, and detailed
spreadsheets with associated program outcomes and outcome indicators, described in more detail
below. This material for each of the above 14 program areas is provided at
http://www.urban.org/center/cnp/commonindicators.cfm
Program description: A short paragraph illustrating the types of programs included.
Includes the scope or coverage of activities included or excluded from consideration.
Appears on both the outcome sequence chart and the indicators spreadsheet for each
program area.
Outcome sequence chart: A visual depiction of the order in which program outcomes are
expected to occur.
Connects outcomes with directional lines and arrows to indicate the expected sequence of
key results. Intended to provide a quick one-page snapshot for each program area,
whereas the outcome indicator tables (described below) offer a more comprehensive
review and detail. The outcome sequence chart for the project, provided above, is an
example.
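As a hedged sketch of how such a chart might be kept in machine-readable form (this is not a tool the project provides), the Python below represents a hypothetical sequence of outcomes as a directed graph and recovers their left-to-right order; the outcome names are invented for illustration.

```python
# Sketch: an outcome sequence chart as a directed graph (hypothetical outcomes).
# Each key maps an outcome to the set of outcomes expected to occur before it.
from graphlib import TopologicalSorter  # standard library, Python 3.9+

chart = {
    "clients better off": {"more effective programs"},
    "more effective programs": {"better indicators"},
    "better indicators": set(),
}

# static_order() yields outcomes from first (left on the chart) to last (right).
order = list(TopologicalSorter(chart).static_order())
```

Storing the chart this way makes it easy to check that no outcome is listed before its prerequisites when a nonprofit adapts a generic chart to its own program.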
Comments, caveats, and suggestions about the program outcome sequence charts include the
following:
The focus is exclusively on program results—intermediate or end outcomes. Some
diagrams cover program inputs, activities, and outputs that occur internally and at the
earlier stages of a program. The focus here on outcomes is deliberate. We hope that
nonprofit organizations will concentrate more of their (often limited) data collection
efforts on measuring and reporting program results, rather than merely counting internal
activities (for example, number of staff training sessions held) or outputs (number of
pamphlets produced or distributed) that are of decreasing interest to stakeholders
such as foundations or the public.5
The charts organize outcomes from left (first to occur) to right (later to occur). Boxes at
the left are usually intermediate outcomes, which tend to be realized sooner than end or
final outcomes, on the right. In some cases, the end outcomes may take so long to
achieve that they may appear to be beyond the scope of the program to track or imagine
claiming responsibility. While these are valid concerns, we chose to include such longer-
term, end outcomes to illustrate the ideal or ultimate goal for program participants or
other recipients of services, such as the broader community.
Participant satisfaction is a very important, but sometimes overlooked, element of
program performance. While there is some debate about terminology (for example, are factors
such as timeliness or ease of service outcomes, or are they better grouped as separate
indicators of quality?), program satisfaction is of interest and should be measured by most
providers. Satisfaction indicators tend to be similar across program areas, so they are
included in a box below the chart as a reminder to include one or several such
measures in the overall measurement framework.
The outcome sequence charts are limited in their ability to fully illustrate the dynamic
and sometimes circular nature of many programs. Because they are identifying key
outcomes for a program area, they are intentionally somewhat generic. Most charts
illustrate a series of outcomes on a continuum that proceeds from left to right and are
connected by a series of forward arrows. In some cases, however, we attempted to
illustrate exceptions or the more circular nature of results by using dotted lines, arrows
pointing in both directions, or stacked boxes (intended to reflect a certain equality among
outcomes, rather than a rank-order or intended sequence.)
We consulted many sources in the production of these charts and subsequent sets of
indicators. The key written sources of materials are noted at the bottom of outcome
5 We make a limited number of exceptions to this rule by including a lead box to illustrate the position and relative
nature of outputs in relation to recommended program outcomes. See for example, affordable housing, advocacy, or
performing arts. These references were included at the recommendation of program reviewers,
who argued that it was important to at least acknowledge that without such outputs being
produced by the program there would never be cause to track or measure subsequent outcomes.
sequence charts and also at the end of the tables of outcome indicators. We encourage
users to locate and consult these references for additional examples of indicators, and for
information on data collection strategies, program context, etc. In addition to reviewing
numerous written sources for each program area, we consulted with project advisors and
content experts for each program area. These individuals provided essential feedback
and helped us to refine the charts as they are currently presented.
Finally, these outcome sequence charts are intended to be a starting point for
organizations establishing outcome measurement processes. These charts are not
intended to be comprehensive, but rather to identify outcomes and associated indicators
that meet important selection criteria and have been vetted by experts in the field. They
almost always should be modified by a nonprofit organization so as to better meet the
needs of the organization.
Table of Outcomes and Outcome Indicators: Provides detailed information for each outcome
and outcome indicator identified for a particular program area.
Each indicator is accompanied by a suggested data collection strategy, explanatory notes
(where appropriate), as well as a suggested classification as an intermediate or end
outcome indicator.
Comments, caveats, and suggestions about the outcome indicator tables include the following:
The outcomes and outcome indicators are expected to be key result areas of interest for
many, if not all, nonprofits in the particular program area.
One or more candidate specific outcome indicators are included for each program
outcome. Outcome indicators are expressed in a measurable format (such as a number
and/or percent) and attempt to capture and report measures of the program outcome.
A suggested data collection procedure for obtaining data for each outcome indicator is
included. Having a sound practical data collection procedure is vital to obtaining the
outcome data. More than simply offering a framework for consideration and discussion,
we hope these materials can readily be incorporated into planned or on-going
management and data reporting efforts.
Notes providing additional details or caveats related to specific outcome indicators are
included on the spreadsheets. Often they provide suggestions for important client groups
that might be considered individually at the stage of data analysis and reporting.
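As an illustration only, the "number and/or percent" format described above can be computed directly from client records; the field names and data below are hypothetical.

```python
# Sketch: computing a "number and percent" outcome indicator from client
# records. Field names and data are hypothetical.

clients = [
    {"id": 1, "completed": True,  "employed_90d": True},
    {"id": 2, "completed": True,  "employed_90d": False},
    {"id": 3, "completed": False, "employed_90d": False},
    {"id": 4, "completed": True,  "employed_90d": True},
]

def percent_indicator(records, predicate):
    """Return (count meeting predicate, percent of all records)."""
    n = len(records)
    count = sum(1 for r in records if predicate(r))
    return count, round(100 * count / n, 1) if n else 0.0

# Example indicator: percent of clients who completed the program and
# retained employment at 90 days.
retained = percent_indicator(clients, lambda r: r["completed"] and r["employed_90d"])
# retained -> (2, 50.0)
```

Reporting both the count and the percent, as the indicator tables suggest, keeps small denominators visible to readers of the results.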
As noted earlier, the table of outcome indicators and outcome sequence charts for the 14 program
areas examined during this project can be found at
http://author.urban.org/center/cnp/commonindicators.cfm and http://www.whatworks.org.
Part 3
Draft Common Outcome Framework
A common outcomes framework provides an organized, generalized set of outcomes and
outcome indicators that nonprofit programs can use to help them determine the outcomes and
outcome indicators appropriate for their service programs.
This draft was developed using basic classification principles and the extensive
information gathered during the development of the outcomes and indicators for the 14 specific
program areas described in Part 2. To develop the common outcomes framework, we reviewed
the outcomes and outcome indicators for these specific program areas to identify those that
appeared to be applicable across multiple program areas.
The framework has these major purposes:
• It provides a starting point for programs to begin developing their own outcome
measurement process.
• For programs that already have some form of outcome measurement process, it
provides a checklist for reviewing their coverage to determine whether other
outcomes and/or outcome indicators should be included in their outcome
measurement process.
• To the extent that nonprofit organizations use such common outcome indicators,
this will provide an opportunity for across-program comparisons, enabling each
nonprofit organization to benchmark itself against other organizations that are
providing similar services.
Programs often have similar goals. For example, many different types of programs seek
to change knowledge, attitudes, behavior, and status or condition of clients/participants. Many
different types of programs seek to achieve the same quality-of-service elements. If the
outcome information from a wide number of targeted program areas is collected,
reviewed for quality, and grouped by program area, the results are likely to be useful to those and
other nonprofits providing similar services.
Such an arrangement of outcomes with associated indicators can become a standard
framework that provides guidance and context, helping users learn what they need to know. For
example, although much information on program outcomes is available from a web-based key
word search, the results are likely to be undifferentiated—overwhelming in volume and time
consuming to assess for relevance. And the search results might vary significantly if different
key terms were chosen for the search.
The development and refinement of the common framework should continue to be an
iterative process, as outcomes and indicators are collected for even more programs. An excerpt
from the common framework is presented below. It includes program-centered outcomes (reach,
participation, satisfaction); participant-centered outcomes (knowledge/learning/attitude,
behavior, condition/status); community-centered outcomes (policy, public health/safety, civic
participation, economic, environmental, social); and organization-centered outcomes (financial,
management, governance). Little work has been completed on the organization-centered
outcomes. The full version of the current draft framework can be found at
http://author.urban.org/center/cnp/commonindicators.cfm and http://www.whatworks.org.
Common Framework of Outcomes
Excerpt: Participant-Centered Outcomes
1) Knowledge/Learning/Attitude
a) Skills (knowledge, learning)
Common Indicators: Percent increase in scores after attending
Percent that believe skills were increased after attending
Percent increase in knowledge (before/after program)
b) Attitude
Common Indicators: Percent improvement as reported by parent, teacher, co-worker, or other
Percent improvement as reported by participant
c) Readiness (qualification)
Common Indicators: Percent feeling well-prepared for a particular task/undertaking
Percent meeting minimum qualifications for next level/undertaking
2) Behavior
a) Incidence of bad behavior
Common Indicators: Incidence rate
Relapse/recidivism rate
Percent reduction in reported behavior frequency
b) Incidence of desirable activity
Common Indicators: Success rate
Percent that achieve goal
Rate of improvement
c) Maintenance of new behavior
Common Indicators: Number weeks/months/years continued
Percent change over time
Percent moving to next level/condition/status
Percent that do not reenter the program/system
3) Condition/Status
a) Participant social status
Common Indicators: Percent with improved relationships
Percent who graduate
Percent who move to next level/condition/status
Percent who maintain current level/condition/status
Percent who avoid undesirable course of action/behavior
b) Participant economic condition
Common Indicators: Percent who establish career/employment
Percent who move to long term housing
Percent who maintain safe and permanent housing
Percent enrolled in education programs
Percent who retain employment
Percent with increased earnings
c) Participant health condition
Common Indicators: Percent with reduced incidence of health problem
Percent with immediate positive response
Percent that report positive response post-90 days
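For organizations that want the framework in machine-readable form, one possible (not prescribed) representation is a nested mapping; the sketch below abbreviates the excerpt above to two branches, and the lookup helper is hypothetical.

```python
# Sketch: part of the participant-centered excerpt as a nested mapping
# (abbreviated to two branches; the full draft framework has many more).
framework = {
    "Participant-Centered": {
        "Knowledge/Learning/Attitude": {
            "Skills": ["Percent increase in scores after attending",
                       "Percent increase in knowledge (before/after program)"],
            "Attitude": ["Percent improvement as reported by participant"],
        },
        "Behavior": {
            "Maintenance of new behavior": ["Percent change over time",
                                            "Percent that do not reenter the program/system"],
        },
    },
}

def indicators_for(framework, *path):
    """Walk the hierarchy and return the indicator list stored at `path`."""
    node = framework
    for key in path:
        node = node[key]
    return node
```

A structure like this lets a nonprofit pull candidate indicators for a chosen outcome category when assembling its own measurement plan.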
Part 4
Tips on Using the Common Framework Project Materials
Outcome information seldom, if ever, tells why the outcomes have occurred. Your
program will seldom be 100 percent responsible for those outcomes. Inevitably, other
factors, both external and internal, will affect outcomes. However, outcome information
is vital for indicating what needs to be done to improve future outcomes. Your choice of
outcome indicators to track should be determined not by the extent of your influence over
the outcome but by the importance of the outcome for your clients.
Outcome data should be used to identify where results are going well and where not so
well. When not going well, the program needs to attempt to find out why. This process is
what leads to continuous program learning and program improvement.
Outcome information is much more useful if the measures are tabulated for various
categories of customers/clients, for example, by gender, age group, and race/ethnicity,
income level, etc.
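A minimal sketch of such a tabulation, assuming hypothetical client records with gender and age-group fields:

```python
# Sketch: tabulating an outcome measure by client category
# (hypothetical records; standard library only).
from collections import defaultdict

records = [
    {"gender": "F", "age_group": "18-24", "improved": True},
    {"gender": "M", "age_group": "18-24", "improved": False},
    {"gender": "F", "age_group": "25-44", "improved": True},
    {"gender": "F", "age_group": "18-24", "improved": False},
]

def tabulate(records, category, measure):
    """Percent of records with `measure` true, broken out by `category`."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[category]] += 1
        hits[r[category]] += bool(r[measure])
    return {k: round(100 * hits[k] / totals[k], 1) for k in totals}

by_gender = tabulate(records, "gender", "improved")
# by_gender -> {"F": 66.7, "M": 0.0}
```

Breaking results out this way often reveals that a program works well for one client group but not another, which aggregate figures hide.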
It may be wise to start by tracking only a very small number of the indicators, especially if
you have little experience with such data collection and very limited resources. Not all
outcomes or indicators listed will be relevant to every organization. Once your organization
becomes more comfortable with outcome measurement, more outcomes and indicators can be
added to the system.
Review the list of outcome indicators for the program area that most closely matches yours,
but also check the common framework to see if the more general set suggests other relevant
indicators.
Selecting which outcomes and indicators to monitor is crucial. Sessions with staff and
board members, and perhaps clients, to discuss what outcomes and outcome indicators
your program should monitor will be important and will keep all aware of the outcome
measurement efforts. The staff and board members will be the persons most able to use
the findings to improve services.
Some of the most important client outcomes and outcome indicators will require new
data collection procedures (such as determining the extent to which improved client
conditions have been sustained for at least, say, 6 or 12 months after service to the client
has been completed). Nonprofit organizations should not give up too quickly on
implementing such data collection procedures. Often, surprisingly inexpensive
procedures can be used, especially if the program has any type of aftercare process.
Additional resources:
• Urban Institute Series on Outcome Management for Nonprofit Organizations:
− “Key Steps in Outcome Management” by Harry P. Hatry and Linda M. Lampkin
(http://www.urban.org/publications/310776.html)
− “Finding Out What Happened to Former Clients” by Ritu Nayyar-Stone and Harry P.
Hatry (http://www.urban.org/publications/310815.html)
− “Developing Community-wide Outcome Indicators for Specific Services” by Harry P.
Hatry, Jake Cowan, Ken Weiner and Linda M. Lampkin
(http://www.urban.org/publications/310813.html)
− “Surveying Clients about Outcomes” by Martin D. Abravanel
(http://www.urban.org/publications/310840.html)
− “Analyzing Outcome Information” by Harry P. Hatry, Jake Cowan and Michael
Hendricks (http://www.urban.org/publications/310973.html)
− “Using Outcome Information” by Elaine Morley and Linda M. Lampkin
(http://www.urban.org/publications/311040.html)
• The Center for What Works Performance Measurement Toolkit and other tips, tools,
resources and training
(http://www.whatworks.org/displaycommon.cfm?an=1&subarticlenbr=13%20)
• Performance Measurement: Getting Results, 2nd edition, by Harry Hatry, forthcoming 2006
(http://www.urban.org/books/pm/chapter1.cfm)
• "Measuring Program Outcomes: A Practical Approach" by the United Way of America
(http://national.unitedway.org/outcomes/index.cfm)
• Ten Steps to a Results-Based Monitoring and Evaluation System, World Bank, 2004
• "Guidebook for Performance Measurement," Turning Point, 1999
• Boys & Girls Club of America, "Youth Development Outcome Measurement Tool Kit"
(http://www.bgca.org/)
• Benchmarking for Nonprofits: How to Measure, Manage, and Improve Performance by
Jason Saul (http://www.fieldstonealliance.org/productdetails.cfm?PC=52)
Appendix
Common Measures Advisory Committee Members
Audrey R. Alvarado, Executive Director, National Council of Nonprofit Associations
Patrick Corvington, former Executive Director, Innovation Network, Inc.
Kathleen Enright, Executive Director, Grantmakers for Effective Organizations
Kathleen Guinan, Executive Director, Crossway Community
Susan Herr, Managing Director, Community Foundations of America
Amy Coates Madsen, Program Director, Standards for Excellence
Ricardo Millet, former President, The Woods Fund of Chicago
Mark Moore, Director, The Hauser Center for Nonprofit Organizations
Margaret C. Plantz, United Way of America
James Saunders, The Evaluation Center
Laura Skaff, Director of Evaluation, Volunteers of America
Ken Voytek, Goodwill Industries International, Inc.
Dennis R. Young, Andrew Young School, Georgia State

Common Measures Project Staff

The Urban Institute: Linda Lampkin, Mary Winkler, Janelle Kerlin, Harry Hatry
The Center for What Works: Debra Natenshon, Jason Saul, Julia Melkers, Anna Seshadri