This document introduces scales for measuring the qualitative results of development work. Scales allow assessment of outcomes such as empowerment, resilience, and institutional capacity by describing performance levels along a ladder or sequence of phases. Developing scales in a participatory way helps ensure stakeholder values are taken into account. Scales make it possible to systematically monitor qualitative changes among groups and to aggregate diverse data better than counting activities or collecting testimonies alone, providing a fuller picture than single indicators can.
How to assess qualitative results at rights-holder and institutional level
An introduction to scales for performance measurement

ABSTRACT
How ‘good’ is good when we strive to promote women’s empowerment, to strengthen resilience to natural or man-made disasters, or to promote food security? When is ‘good’ good enough? This document introduces scales to measure the qualitative results of your work.

Strategihuset, Malene Sønderskov
1. The challenge

How ‘good’ is good when we strive to promote women’s empowerment, to strengthen resilience or to promote food security among poor and marginalised population groups? And when is ‘good’ good enough when we promote sustainable changes in people’s lives or in an organisation’s capacity to fight for the rights of the poor?

Despite all their good intentions, a majority of NGOs do not have the information or tools they need to determine the quality and scope of their results. They find it hard to course-correct when they are off track, and few have a reliable way of knowing whether they are doing meaningful, and measurable, good for those they serve.

The reasons for this are manifold. One is that assessing and documenting the qualitative impact that social change interventions have in people’s lives, or in the lives of organisations, can be a challenge, and such impacts can be difficult to account for. Challenges include questions such as:

- How do we understand and verify our contribution to poverty reduction in a complex environment with multiple actors?
- How can we validate that the short and intermediate outcomes we are able to achieve in a three- to five-year program period will eventually lead to longer-term impacts in terms of resilience, government accountability and access to rights?
- How do we aggregate data from a variety of interventions, in various geographic and thematic areas of work, without smothering ourselves, and the people we work with, in endless, cumbersome and bureaucratic data collection procedures?
- And, last but not least: can data collection be organised in a spirit of curiosity, learning and reflection, rather than in a culture of ‘command and control’?
2. The most common responses and their shortcomings

In an effort to account for our results, many of us resort to counting activities or the number of people we have reached. Another commonly used approach is to map the planned and unplanned outcomes of our work, or to collect change stories and testimonies from the beneficiaries of our work.

Counting activities, outputs and rights’ holders can be very helpful for documenting that we are following our work plan and our contract as expected. Mapping outcomes and collecting testimonies may help us understand the outcomes we achieve, as well as the “how and why”, but these methods give limited support to a more detailed and exact account of those outcomes. They are also not particularly suitable for assessing the quality of our outcomes, which are often the preconditions for long-term, positive and sustainable change in people’s lives.

Outcomes such as ‘resilience’, ‘agency’ or ‘empowerment’, which are often at the heart of what we do, are hard to measure by counting. And while testimonies and change stories often capture the qualitative progress in informants’ feelings, skills, behaviours or relationships effectively, it can be hard to argue that the changes traced in testimonies are representative of anyone other than the informants themselves.

For these reasons, the key challenge in measuring the outcomes and results of our work remains: how can we account, in a more systematic way, for the qualitative changes that are at the heart of so much of what we do, in order to create sustainable improvements in people’s lives?
3. Between counting and testimonies – what can we do?

Adopting a scale-based assessment methodology can help us to overcome some of the aforementioned challenges, as it allows us to monitor, systematically and among a broader group of people, what are usually considered qualitative outcomes.

A scale is a tool that provides an evaluative description of what ‘performance’ or ‘quality’ looks like in qualitative areas such as ‘empowerment’, ‘agency’ or ‘resilience’. It can help us assess the value of an intervention to the people or organisations that a program or project is seeking to benefit.

It is important to note that using scales does not solve the problem of verifying how our interventions contribute to wider societal changes, such as poverty reduction or the promotion of democratic societies; for that we would need a more thorough contribution analysis. Scales can, however, help us validate that the short and intermediate outcomes we are able to achieve in a three- to five-year program period will eventually lead to longer-term impacts in terms of resilience, government accountability and access to rights.

Scales can also make a useful contribution to the aggregation of data from a variety of interventions, in various geographic and thematic areas of work, provided the same standard scale model is applied.
Rating: Explanation

Excellent: Clear example(s) of exemplary performance exist(s). No weaknesses. Independent of the intervention.
Very good: Very good performance in all aspects. Strong overall; a need remains for occasional support.
Good: Reasonably good performance overall. Regular support still required.
Inadequate: Fair performance. Some serious weaknesses on key aspects. Active only with support.
Poor: Clear evidence of unsatisfactory function. Serious weaknesses across the board on crucial aspects.

Table 1: Example of a standard scale that can be used to categorise performance in specific areas of an intervention.
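To make this concrete, a standard scale like the one in Table 1 can be expressed as a small data structure. The following Python sketch is purely illustrative; the rating names and descriptions come from Table 1, while the `Rating` class and `describe` function are hypothetical names, not part of any established tool.

```python
from enum import IntEnum

class Rating(IntEnum):
    """The five-step standard scale of Table 1 (higher is better)."""
    POOR = 1
    INADEQUATE = 2
    GOOD = 3
    VERY_GOOD = 4
    EXCELLENT = 5

# Evaluative descriptions attached to each rating, taken from Table 1.
DESCRIPTIONS = {
    Rating.POOR: "Clear evidence of unsatisfactory function; serious weaknesses across the board.",
    Rating.INADEQUATE: "Fair performance; serious weaknesses on key aspects; active only with support.",
    Rating.GOOD: "Reasonably good performance overall; regular support still required.",
    Rating.VERY_GOOD: "Very good performance in all aspects; occasional support still needed.",
    Rating.EXCELLENT: "Exemplary performance; no weaknesses; independent of the intervention.",
}

def describe(rating: Rating) -> str:
    """Return the evaluative description for a given rating."""
    return DESCRIPTIONS[rating]

print(describe(Rating.GOOD))
```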
4. Scales versus indicators – what is the difference?

Many of us struggle to capture the complexity of our work in a single indicator or ‘sign of proof’ that will confirm that our intervention has contributed to the desired result. Scales have the benefit of “adding flesh” to such indicators, making them more visible and relevant to the reality of the world in which we operate.

Because scales are broader in their description of a situation than indicators, they can help us describe the ‘fuller picture’ of a situation at a given performance level relating to the qualitative outcomes of our work. They also allow us to embrace some of the complexity that is often associated with social change.

The difference between indicators and scales is illustrated below:
[Figure: 1. What we want to measure: empowerment, resilience, institutional capacity etc. 2. What we usually do: find a few indicators (some might be more ‘spot on’ than others). 3. What scales can add: a wider coverage of the situation than single indicators.]
5. How are scales developed and used for monitoring and learning?

Step 1. Formulating a scale

Scales consist of ‘ladders’ or ‘phases’ with measurable steps, along which participants, rights’ holders and/or targets of an intervention can move upwards towards a desired level. This helps to determine the degree to which interventions have attained their results, and can show whether the intervention has reached a desirable level of, for example, empowerment or organisational capacity.

Ideally, scales should be developed in a participatory manner, to ensure that the voices and values of all stakeholders are taken into account as the scale is developed. A participatory approach helps project planners and implementers alike to create a deeper, shared understanding of what is really important in their intervention, and of what ‘performance’ looks like in practical terms at various levels, from the very poor to the excellent.

A participatory approach will also strengthen ownership and an understanding of the benefits of using scales. It can help staff members perceive scales not as a tool for ‘command and control’, but as part of a reciprocal process of reflection and learning for the entire organisation.
The easiest way to develop a scale is to start by formulating the ‘extremes’: what the situation looks like in the ideal case, and what it looks like in the worst case. On a five-phase scale such as the one in Table 2 below, these extremes equate to Phases Five and One respectively. Once the situation in these two phases has been described, it is easier to formulate how the situation could look in between, in Phases Two, Three and Four.
Phase V – Women are proactive and act independently of the intervention:
- Women who have claimed their right to inherit speak openly about it and say why claiming rights will benefit the entire family.
- Women mobilise, encourage and support other women to claim their right to inherit.

Phase IV – Women are becoming proactive, but still need support:
- Women seek judicial support in specific cases of inheritance.
- Women mobilise support from female and male relatives for their right to inherit.

Phase III – Women are active, and support the intervention:
- Women have an approximate overview of what they would inherit if they claim their right to do so, and of whether and/or when this would be relevant.
- Women take steps to ensure that any future inheritance claimed remains in their control (and is not controlled by husbands and/or other relatives).

Phase II – Women are aware and participate in the intervention:
- Women know they have rights, but remain silent, or question their rights among themselves.
- Women know the procedures of inheritance and where they can seek support.

Phase I – Women are passive and unaware:
- Women are not aware that they have a right to decide about their own bodies, resources or inheritance.

Table 2: A scale measuring women’s empowerment in the field of claiming their right to inherit.
It is possible to develop scales in areas as diverse as food security, resilience and community activism.
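A scale of this kind can also be written down as a simple data structure, which makes later categorisation and aggregation easier. The sketch below is a hypothetical illustration in Python, using abridged descriptions from Table 2; the `Phase` class and the variable names are assumptions, not an established format.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Phase:
    """One step on a five-phase scale (1 = lowest, 5 = highest)."""
    number: int
    heading: str      # the fixed phase heading
    description: str  # what this phase looks like for this specific intervention

# Abridged version of the empowerment scale in Table 2.
empowerment_scale = [
    Phase(1, "Women are passive and unaware",
          "Unaware of the right to decide about their own bodies, resources or inheritance."),
    Phase(2, "Women are aware and participate in the intervention",
          "Know they have rights and where to seek support, but remain silent."),
    Phase(3, "Women are active and support the intervention",
          "Know what they could inherit and take steps to keep it in their control."),
    Phase(4, "Women are becoming proactive, but still need support",
          "Seek judicial support and mobilise relatives behind their right to inherit."),
    Phase(5, "Women are proactive and act independently of the intervention",
          "Speak openly about claimed rights and mobilise other women."),
]
```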
Step 2. Creating a baseline – categorising targets for the intervention

Using scales to measure outcomes and progress is not difficult. In fact, the hardest part is identifying the optimal outcome sequence, that is, the content of the outcome phases, for the people we target in our intervention, as illustrated in Table 2. Once this conceptual work has been completed, the means for measuring short-term and intermediate outcomes become clear.

Program implementers should place the targets or rights’ holders of the intervention under the phase or phases in the scale that best match their characteristics. To determine the best match, various data collection methodologies can be used, including:

- Observing the behaviours and attitudes of targets
- Conducting semi-structured or focus group interviews
- Asking targets to complete a questionnaire or test
- Asking targets to participate in a self-assessment and indicate which of the categories they think best match their feelings, attitudes and practices

It is important that more than one data collection methodology, and ideally three, is used for the categorisation. This strengthens its accuracy, although the assessment will always remain approximate, as none of the categories is likely to match any one person a hundred percent.
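One simple way to combine the phase estimates produced by several data collection methods is to take their median, so that a single outlying method does not dominate. The following is a minimal sketch of such triangulation, assuming each method has already been translated into a phase number from 1 to 5; the `categorise` function is a hypothetical name.

```python
from statistics import median

def categorise(phase_estimates: list[int]) -> int:
    """Combine phase estimates (1-5) from several data collection methods,
    e.g. observation, interviews and self-assessment, into one best-match
    phase. The result is approximate, as the text above notes."""
    if len(phase_estimates) < 2:
        raise ValueError("Use at least two, and ideally three, methods.")
    return round(median(phase_estimates))

# Example: observation suggests Phase 2, while an interview and a
# self-assessment both suggest Phase 3.
print(categorise([2, 3, 3]))  # -> 3
```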
Placing target groups under the relevant phase or phases at the onset of the intervention is useful both as an input to a performance baseline and for assessing the progress and performance of our interventions during implementation. This placement is decided by the project implementers and planners, as they are the ones who will identify which of the categories in their scale best reflect the situation of their rights’ holders, or of the institutions that they work with.

When the first measurements are made, it is likely that rights’ holders/targets will be best characterised by the descriptions under Phases One or Two, especially if the intervention concerns a group that has not been targeted before. The outcome of the categorisation may be more diversified if the group consists of ‘old’ as well as new rights’ holders or targets.
Step 3. Measuring progress

As mentioned earlier, categorising rights’ holders or targets on your scale at the onset of your intervention builds the intervention’s performance baseline. (A baseline can also be constructed retrospectively if none was made at the onset, for instance by interviewing targets about their attitudes and feelings at the start of the project.)

Depending on the nature of your intervention and how quickly changes are likely to happen, you may repeat the exercise described in Step 2 on an annual or semi-annual basis in order to trace progress. Provided the intervention is effective, rights’ holders/targets are expected to belong to the ‘higher’ categories in later measurements. Rights’ holders who reach Phases Four or Five are expected to ‘graduate’ from the intervention, so that new beneficiaries can be enrolled.
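Repeated categorisations can be compared as simple distributions over the phases. The sketch below illustrates this with invented numbers; `phase_distribution` and `share_at_or_above` are hypothetical helper names, and the data is not from any real intervention.

```python
from collections import Counter

def phase_distribution(categorisations: list[int]) -> Counter:
    """Count how many rights' holders fall under each phase (1-5)."""
    return Counter(categorisations)

baseline = phase_distribution([1, 1, 2, 2, 2, 3])   # first measurement
follow_up = phase_distribution([2, 2, 3, 3, 4, 5])  # later measurement

def share_at_or_above(dist: Counter, phase: int) -> float:
    """Share of rights' holders categorised at the given phase or higher."""
    return sum(n for p, n in dist.items() if p >= phase) / sum(dist.values())

print(share_at_or_above(baseline, 3))   # ≈ 0.17 (1 of 6)
print(share_at_or_above(follow_up, 3))  # ≈ 0.67 (4 of 6)

# Rights' holders reaching Phase 4 or 5 are candidates to 'graduate'.
print(follow_up[4] + follow_up[5])      # -> 2
```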
[Figure: rights’ holders categorised across Phases 1–5 at the first measurement and at a later measurement, showing movement towards the higher phases.]
Table 3: Using a scale for measuring progress.
6. Apples or pears? Aggregation of data in diversified programs

Many programs, such as country or thematic programs, are challenged by the fact that they support a variety of interventions under the same ‘heading’ or outcome. If you try to aggregate data from these interventions in order to answer broader questions about a program’s overall progress or contribution to outcomes, such as community resilience, food security or accountability, you can quickly get the feeling that you are trying to compare apples with pears or bananas.
Scales can help alleviate this problem, as they allow the country or thematic program to answer broad questions about its overall progress towards outcomes, while leaving room to reflect the individual characteristics of each intervention under the program.

This is done by ensuring that all similar interventions (e.g. empowerment or resilience interventions) under the program outcome use the same scale headings (e.g. a scale from Phase 1 to 5), while allowing each intervention to interpret how the situation would look under each of the five headings. The two tables below provide examples of how that could be done, and a sketch of how the resulting data can be aggregated follows after the tables.
Fixed phase headings, with descriptions tailored to the individual intervention:

Phase I – Targets are passive and unaware:
Community members are unaware of their rights and of where to claim them.

Phase II – Targets are aware and start questioning their situation:
Community members are aware of their rights and of opportunities to seek local development funds.

Phase III – Targets take steps to change their situation with the support of the intervention:
Communities plan a joint project together and file a claim with the local authorities, closely assisted by the intervention.

Phase IV – Targets are being proactive but still need regular support:
Communities negotiate with local authorities about their project. The intervention provides advice regularly.

Phase V – Targets are becoming independent of support:
Communities implement projects, plan new initiatives and campaign with authorities, without support.

Table 4: Example 1. What is measured: community capacity for claiming rights and access to resources. Intervention 1: claiming local development funds.
Fixed phase headings (repeated from the table above), with descriptions tailored to the individual intervention:

Phase I – Targets are passive and unaware:
Community members are unaware of rights and budgets in their municipality.

Phase II – Targets are aware and start questioning their situation:
Community members are able to read local budgets and expenditures, and have the skills to question budget expenditures.

Phase III – Targets take steps to change their situation with the support of the intervention:
Individual community members file questions related to the budget at local council meetings, with the questioning supported by the intervention.

Phase IV – Targets are being proactive but still need regular support:
Community members organise themselves into a budget monitoring committee. The intervention supports the formulation of bylaws for the committee.

Phase V – Targets are becoming independent of support:
The monitoring committee meets every month, without support. A network to local decision makers is established.

Table 5: Example 2. What is measured: community capacity to claim rights and to access community resources. Intervention 2: claiming transparency of local budgets and expenditures.
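Because the phase headings are fixed across interventions, follow-up measurements from different interventions can simply be added phase by phase. The following Python sketch shows the idea with invented counts; the variable names and numbers are hypothetical.

```python
from collections import Counter

# Invented follow-up measurements from the two interventions above,
# keyed by phase number (1-5). Both use the same fixed phase headings.
intervention_1 = Counter({1: 4, 2: 10, 3: 6, 4: 2, 5: 0})  # development funds
intervention_2 = Counter({1: 2, 2: 5, 3: 8, 4: 4, 5: 1})   # budget transparency

# Identical headings make aggregation a phase-by-phase addition, which
# answers broad questions about the program's overall progress.
program_total = intervention_1 + intervention_2
for phase in range(1, 6):
    print(f"Phase {phase}: {program_total[phase]} communities")
```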
Literature for inspiration:
- E. Jane Davidson, Evaluation Basics
- David Hunter, Working Hard and Working Well: A Practical Guide to Performance Management
- Mario Morino, Leap of Reason: Managing to Outcomes in an Era of Scarcity
- John Mayne, Contribution Analysis: An Approach to Exploring Cause and Effect
- Joy MacKeith, The Development of the Outcomes Star: A Participatory Approach to Assessment and Outcome Measurement