Instruments for the Improvement of Accountability and Governance in NGOs (Humaneasy Consulting)
Marilyn Wyatt (Consultant, Prague)
Series of International Conferences
Civil Society Organizations
Transparency and Responsibility
2nd Conference "Ethics, Transparency and Responsibility"
Held at the Goethe-Institut Lisbon
Organized by Humaneasy Consulting and Friedrich Ebert Stiftung Portugal
More at http://www.humaneasy.com/conf/
Capacity Building: Community Partnerships and Outcomes (Bonner Foundation)
This session will frame our focus on community capacity building and impact, introducing the high-impact community engagement practices and a set of community change outcomes. Teams will explore the intended capacity building and change outcomes that should guide their projects.
Resourceful Mobilizing for Resource Mobilization (IFPRI-PIM)
This presentation was given by Frank Place (IFPRI), as part of the Capacity Development Workshop hosted by the CGIAR Collaborative Platform for Gender Research. The event took place on 7-8 December 2017 in Amsterdam, the Netherlands, where the Platform is hosted (by KIT Royal Tropical Institute).
Read more: http://gender.cgiar.org/gender_events/annual-scientific-conference-capacity-development-workshop-cgiar-collaborative-platform-gender-research/
Resource mobilization is a management approach that enables organizations, their leaders, stakeholders and people to develop sustainable relationships and continuous support from partners. The Resource Mobilization and Proposal Writing Workshop framework provides a step-by-step ("ladderized") approach to establishing common knowledge of the subject area and raising awareness of the skills needed in people management and project development.
Though this approach remains useful and outcomes-based, it is important that participants also develop a transformative understanding of resource mobilization: its nature, its importance, and the continuing mindset needed to promote and nurture relationships among their people, organizations and providers. This transformative orientation allows trainees to internalize these principles and apply them in their daily operations.
However it is defined, resource mobilization is a continuing process of identifying and building relationships with people who share an organization's values, insights and advocacies. It should be a mindset that establishes goodwill among members and partners who view resources as more than fundraising; its value lies in building, managing and nurturing relationships for maximum advantage.
Organizational Capacity-Building Series - Session 6: Program Evaluation (INGENAES)
This session describes different kinds of program evaluations and key evaluation considerations. These presentations are part of a workshop series implemented in Nepal in 2016 as part of the INGENAES initiative.
David Fleming held a seminar on monitoring and evaluation in conflict-affected environments at the Post-war Reconstruction and Development Unit (PRDU), University of York.
Measuring Impact - An Engage Workshop on Monitoring & Evaluation (Participation Works)
Two heads are better than one, and 30 people from 14 different organisations sharing expertise and ideas made the Measuring Impact workshop a truly enlightening event on February 23, 2012.
The Measuring Impact workshop was the first Engage event and was held at the National Children’s Bureau (NCB) offices in Belfast.
Find out more:
http://www.participationworks.org.uk/news/engage-workshop-measuring-impact
Reflections from a Realist Evaluation in Progress: Scaling Ladders and Stitch... (Debbie_at_IDS)
In this session, Isabel Vogel, Melanie Punton and Rob Lloyd will reflect on the first year of a three-year realist impact evaluation, examining the Building Capacity to Use Research Evidence (BCURE) programme funded by the UK Department for International Development.
Jennifer Schaus and Associates hosts a complimentary webinar series on the FAR in 2024. Join the webinars on Wednesdays and Fridays at noon, Eastern.
Recordings are on YouTube and the company website.
https://www.youtube.com/@jenniferschaus/videos
This session provides a comprehensive overview of the latest updates to the Uniform Administrative Requirements, Cost Principles, and Audit Requirements for Federal Awards (commonly known as the Uniform Guidance) outlined in the 2 CFR 200.
With a focus on the 2024 revisions issued by the Office of Management and Budget (OMB), participants will gain insight into the key changes affecting federal grant recipients. The session will delve into critical regulatory updates, providing attendees with the knowledge and tools necessary to navigate and comply with the evolving landscape of federal grant management.
Learning Objectives:
- Understand the rationale behind the 2024 updates to the Uniform Guidance outlined in 2 CFR 200, and their implications for federal grant recipients.
- Identify the key changes and revisions introduced by the Office of Management and Budget (OMB) in the 2024 edition of 2 CFR 200.
- Gain proficiency in applying the updated regulations to ensure compliance with federal grant requirements and avoid potential audit findings.
- Develop strategies for effectively implementing the new guidelines within the grant management processes of their respective organizations, fostering efficiency and accountability in federal grant administration.
Understanding the Challenges of Street Children (SERUDS INDIA)
By raising awareness, providing support, advocating for change, and offering assistance to children in need, individuals can play a crucial role in improving the lives of street children and helping them realize their full potential.
Donate:
https://serudsindia.org/how-individuals-can-support-street-children-in-india/
ZGB - The Role of Generative AI in Government Transformation (Saeed Al Dhaheri)
This keynote was presented during the 7th edition of the UAE Hackathon 2024. It highlights the role of AI and generative AI in addressing government transformation to achieve zero government bureaucracy.
Many Ways to Support Street Children (SERUDS INDIA)
Presentation by Jared Jageler, David Adler, Noelia Duchovny, and Evan Herrnstadt, analysts in CBO’s Microeconomic Studies and Health Analysis Divisions, at the Association of Environmental and Resource Economists Summer Conference.
Evaluability Assessments and Choice of Evaluation Methods
1. Evaluability Assessments and Choice of
Evaluation Methods
Richard Longhurst, IDS
Discussant: Sarah Mistry, BOND
Centre for Development Impact
Seminar
19th February 2015
2. Introduction and some health warnings
• Some acknowledgements and thanks
• How this work came about: multilateral agency experience as
well as some review of literature
• Evaluability assessments (EAs) are not new; they go back 25 years
• Will try to avoid getting bound up in the technical aspects ….
some of this will seem common sense …..but what matters is
trying to make explicit the basis on which decisions are
made… and how they relate to the culture of the organisation
• It is important to make judgements about choice of evaluation
methods (as this is a CDI event) and what drives choices. The
EA literature is beginning to enter the debate on choice of methods
• In the scope of this seminar, will not be covering every
evaluation method
3. Context of this work with the International Programme
for the Elimination of Child Labour (ILO-IPEC)
• Large technical cooperation programme (since 1992) largely funded
by US Dept. of Labor
• Causes of child labour are multi-faceted, approaches to eliminate
are equally various
• Main programme tool is Programme of Support to the national
Time Bound Programme to reduce the worst forms of child labour
• TBP involved ‘upstream’ enabling environment and ‘downstream’
action support to reduction of child labour, therefore mix of
interventions
• Also project and global interventions: at its peak IPEC was
carrying out 25 evaluations per year
• See: Perrin and Wichmand (2011) Evaluating Complex Strategic Interventions: The
Challenge of Child Labour in Forss, Marra and Schwartz (eds), Transaction Publ.
4. Context: IPEC Evaluation approaches and
Information Sources
• National Household Surveys
• Baseline Surveys
• Rapid Assessment Surveys
• Child Labour Monitoring Systems and programme monitoring
• Tracking and Tracer studies
• One on one interviews; Focus groups
• Document Analysis, Observation, Case studies
• Impact and outcome evaluations, expanded final evaluations
• Success case method and most significant change
• Use of SPIF: strategic planning and impact framework
5. Context: My baseline at Commonwealth
Secretariat (1995-2002)
• Starting up an expanded evaluation function
• Conservative, diplomatic based organisation
• An organisation with many small (<£50K) projects
• About 4-5 project evaluations plus one strategic review of the
political function
• Evaluation worked with the planning function and reported
directly to the CEO with oversight from the governing body
• Many projects were hard to evaluate because of their design
• Evaluability regarded as achieved through adherence to the 2
year strategic plan
6. Current Use of EAs
• Use of EAs is growing:
• After their popularity in the US in the 1980s, EA guidance has
been developed by ILO, CDA, IDRC, EBRD and UNODC, with
recently DFID, AusAID, UNFPA, WFP, IADB, UNIFEM and HELP
(a German NGO).
• Encouraged by the International Financial Institutions (IFIs)
• Over half of EAs were for individual projects (balance were
country strategies, strategic plans, work plans and
partnerships)
7. Some definitions of EA from multilaterals
• OECD-DAC: ‘the feasibility of an evaluation is assessed … it
should be determined whether or not the development
intervention is adequately defined and its results verifiable,
and if evaluation is the best way to answer questions posed by
policy makers or stakeholders’. (broad)
• Evaluation Cooperation Group of the IFIs: ‘The extent to which
the value generated or the expected results of a project are
verifiable in a reliable and credible fashion’ (narrow but
useful)
• World Bank: ‘A brief preliminary study undertaken to
determine whether an evaluation would be useful and feasible
…. It may also define the purpose of the evaluation and
methods for conducting it’. (says something about methods)
8. Process for EAs (i)
• Common steps include (Davies):
– Identification of project boundaries
– Identification of resources available for EA
– Review of documentation
– Engage with stakeholders, then feedback findings
– Recommendations to cover: project logic and design, M&E
systems, evaluation questions of concern to stakeholders
and possible evaluation designs.
9. Process for EAs (ii) – Incorporating approaches
for methods
• Mapping and analysis of existing information
• Developing the theory of change to identify evaluation
questions noting linkages to changes attributable to
intervention
• Setting out priorities, key assumptions and time frames
• Choosing appropriate methods and tools
• Ensuring resources are available for implementation
• Outline reporting and communicating results of evaluation
10. Issues for an EA
• Review of guidance documents of international
agencies suggest EAs should address three broad
issues:
– Programme design
– Availability of information
– Institutional context (including breadth of stakeholders)
11. EA Tools (i)
• Checklists are normally used: ILO covers five main areas:
– Internal logic and assumptions
– Quality of indicators, Baselines, Targets and Milestones
– Means of verification, measurement and methodologies
– Human and Financial resources, and
– Partners’ Participation and use of information
(and ILO uses a rating system for this).
Don’t knock checklists, there is always a theory of change
embodied in them
An independent consultant is usually employed
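The five-area checklist with ratings can be sketched as a small scoring routine. This is a minimal illustration only, assuming a 0-2 rating per area and an overall readiness threshold; the slide confirms that ILO rates these five areas, but the scale, threshold, and function names here are invented for the example.

```python
# Sketch of an EA checklist with a rating system, modelled loosely on the
# five ILO areas listed above. The 0-2 scale and the readiness threshold
# are illustrative assumptions, not the ILO's actual scheme.

ILO_AREAS = [
    "Internal logic and assumptions",
    "Quality of indicators, baselines, targets and milestones",
    "Means of verification, measurement and methodologies",
    "Human and financial resources",
    "Partners' participation and use of information",
]

def evaluability_score(ratings):
    """Sum per-area ratings (0 = weak, 1 = partial, 2 = strong) and flag
    whether the intervention looks ready for a full evaluation."""
    missing = [area for area in ILO_AREAS if area not in ratings]
    if missing:
        raise ValueError(f"unrated areas: {missing}")
    total = sum(ratings[area] for area in ILO_AREAS)
    return total, total >= 7  # placeholder threshold, not from the ILO
```

In practice the independent consultant's judgement, not a numeric cut-off, drives the recommendation; the point of the sketch is only that a checklist embodies an implicit theory of change that can be made explicit.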
12. EA tools (ii) to lead to choice of methods
• EA can be the focus for a modified design workshop that
brings together staff and participants involved in all stages of
the intervention (e.g. use of SPIF)
• Helps develop a stronger theory of change
• Can strengthen monitoring and needs for other information
• Can defuse suspicions about evaluations
• Can be very useful when a Phase I has been completed and a
Phase II has been proposed, building on an evaluation
• Allows ‘lessons learned’ from Phase I to be properly
addressed
13. Experience from using EAs (i)
• Generally EAs have been a good thing:
– Improved usefulness and quality of evaluations: an advance on when
the evaluator arrived at the end of the project and found no means to
evaluate
– Early EAs dependent on logic models and linearity, now some signs
they are being broadened
– An opportunity for an early engagement with stakeholders, i.e. more
participation
– Some evidence of improvements in project outcomes as well as design
– More resources applied up front helps address later problems
14. Experience from using EAs (ii)
• Some of the difficulties:
– Clash of work cultures between design and evaluation professionals –
working to different incentives and time scales
– Issues of how far the evaluation ‘tail’ wags the design ‘dog’, leading to
some ethical issues
– Have to be prepared for ‘cats’ put among ‘pigeons’ if there are
significant gaps in design; does it mean the intervention is stopped?
– Evaluators must not get too seduced by what EAs can achieve,
especially if original intervention design is weak
– EAs will not work everywhere and must always be light touch - there
will be a budget constraint
– Other techniques may be more appropriate (e.g. DFID approach
papers)
15. Linking to Evaluation Methods
• Using the starting point of Stern et al (2012) Broadening the
range of designs and methods for impact evaluations, DFID
Working Paper No. 38.
– Selection of appropriate evaluation designs has to satisfy three
constraints or demands:
– Evaluation questions
– Programme attributes
– Available evaluation designs
16. Some criteria for choice of methods based on
the results of the EAs (criteria will interact)
• Purpose of the evaluation
• Level of credibility required: what sort of decisions will be
made on the basis of the evaluation?
• What does the agency know already, i.e. nature of existing
information and evidence
• Nature of intervention and level of complexity
• The volume of resources and nature of capacity available to
carry out the evaluation
• Governance structure of the implementing agency and
relationship with partners
17. Purpose of the evaluation
• This is the overarching framing question (so EA can make this
clear)
• Relates to the position of the intervention in the agency’s
planning structure and how evaluation has been initiated
• Any special role for stakeholders
• Is the evaluation being implemented for accountability,
learning or ownership purposes or for wider process
objectives
• Nature of topic: project, country, thematic, global, programme
• To set up an extension of an intervention
18. Level of credibility of evaluation results and
decisions to be made
• How does the decision maker need to be convinced? Independence
of the process?
• How will the evaluation be used? What sort of evaluation
information convinces policy makers?
• What is the nature of the linkages between results and
intervention:
– Attribution
– Contribution
– Plausible attribution
• If attribution is required, with a need for a ‘yes/no, it works or not’
decision, then have to choose an impact evaluation
• If contribution is required, then can use contribution analysis
• If ‘plausible attribution’ is required then can use an outcome
summative method.
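The decision rule above can be written out as a simple lookup: the kind of results-to-intervention linkage the decision maker requires drives the choice of method. The mapping comes straight from the slide; the function wrapper and error handling are illustrative.

```python
# Minimal sketch of the linkage-to-method rule described above.
# The three mappings are taken from the slide text; everything else
# (normalisation, error message) is an assumption for the example.

METHOD_BY_LINKAGE = {
    "attribution": "impact evaluation",          # 'yes/no, it works or not'
    "contribution": "contribution analysis",
    "plausible attribution": "outcome summative method",
}

def choose_method(linkage):
    """Map the required linkage type to the evaluation method named above."""
    try:
        return METHOD_BY_LINKAGE[linkage.strip().lower()]
    except KeyError:
        raise ValueError(f"unknown linkage type: {linkage!r}")
```

For example, `choose_method("plausible attribution")` returns `"outcome summative method"`, mirroring the slide's third case.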
19. Other common observations on method choice
(relates to criterion of credibility)
• Experimental: demonstrates counterfactual, strong on
independence, requires both treatment and control
• Qualitative: strong on understanding, answers ‘why?’,
difficult to scale up findings
• Theory-based and realist evaluation: compatible with
programme planning, strong emphasis on context, requires
strong ToC
• Participatory: provides for credibility and legitimacy, enhances
relevance and use, can be time consuming
• Longitudinal tracking: tracks changes over time and can
provide reasons for change, can be resource intensive
20. What do the agency and its partners already
know?
• No need to repeat evaluations if they do not add to the
agency’s ability to take decisions (value of DFID writing
approach papers)
• Role of information banks outside the agency (e.g. systematic
reviews, research studies); external validity
• Have all stakeholders been involved with information
gathering at the design stage
• How strong is the M&E, and will the ‘M’ be useful for the ‘E’?
• Have worthwhile decisions been made in the past on existing
information, good enough for sound design
• Is some form of comparison group required?
21. Nature of the intervention and level of
complexity
• Key question on complexity is: at what level of complexity/
reductionism is an intervention implemented and can an
evaluation be carried out?
• Do the findings of the evaluation provide the basis for going
ahead to make a decision?
• If complexity is addressed in design through multiple
intervention components, some where the n=1 (addressed to
governments), some where n=thousands (addressed to
children), then different evaluation methods can handle this.
• But what do we know already that allows the evaluator to
compromise on complexity?
22. Resources and capacity
• Much choice comes down to the budget line, what the
evaluation staff know and how much they are willing to take
risks on unfamiliar methods (e.g. realist evaluation) and the
time lines they work to
• There are opportunities for methods to be applied differently
based on criteria already mentioned.
• Some agency staff describe the ‘20 day’, ‘30 day’ etc.
evaluation method, defined by the resources they have
• This is why the ‘outcome summative’ method is so popular
and why efforts should be made to improve it.
23. Governance Structure of the Agency
• Always remains a key issue, as structures often inhibit risk-taking
by the evaluators
• Role of the governing body and executive varies in terms of
what evaluators can do.
24. Importance of strengthening the ‘outcome
summative’ evaluation
• Still remains the most common evaluation method (over 75%
of evaluations?) but not much covered in recent literature
• Large element of evaluator’s judgement involved, familiar,
convenient, inexpensive
• But considering other factors for choice it can become the
best choice: plausible attribution, aligned closely with other
information sources, acknowledges deficiencies in addressing
complexity, borrows ideas from other more rigorous
techniques such as some form of comparison group or
retrospective baseline.