Conducting High Impact Research: Building data ownership and improving data use
Data and M&E
1. The contribution of monitoring and evaluation to effective policy-making through useful data
Quality data has often been overlooked by decision-makers as something trivial, too tedious and time-consuming to merit their attention. The usefulness of data is frequently undervalued, resulting in ad hoc, inefficient use of resources as well as poorly implemented social development interventions and policies. Moreover, the data that is presented is often not simple to understand and is laid out in a complex manner, which dampens enthusiasm for interpreting and using it. This has given rise to the increased use of infographics, which depict accurate, simplified data that is easy to remember. Yet because of fatigue around data and its meaning, importance and relevance, insufficient attention has been given to mandating the collection of useful data.
Decision-makers across the world need to base their decisions on information from reliable sources. This is imperative for evidence-based decision-making, from which relevant, implementable and impactful policies and development interventions can arise. Decision-makers need to learn from the best evidence-based knowledge and experience available, and they need to know what kinds of research and what types of data could help them make the right choices. Successfully implemented and impactful policies and development interventions hinge on appropriate, well-designed monitoring and evaluation frameworks, which depend heavily on useful, quality data. However, practical work in the development field across sectors has often demonstrated that too much data is collected with little regard for quality, that data is missing, or that useless data is collected because monitoring and evaluation frameworks that clearly articulate what type of data is necessary and relevant are missing.
Why is data missing and what is the importance of collecting relevant data?
The missing link between research, practice and policy is data that is accurate, valid and reliable. In Africa, there is a large gap between the producers and consumers of knowledge, and research could have a greater impact on development policy than it has had to date. Researchers as “knowledge makers” struggle to understand the resistance to policy change despite clear and convincing evidence, whilst policymakers as “knowledge consumers” lament the inability of many researchers to make their findings accessible and digestible in time for policy decisions (Jones, 2011: 7).1
Furthermore, the politics surrounding access to information and permission to collect useful data negatively impact the generation of useful research and, subsequently, of current data. In other instances, slow bureaucratic processes and political bickering can further delay the publication of data, keeping it from the general public. Additionally, fieldwork experience has highlighted that many monitoring and evaluation officers do not understand the importance of collecting, collating, analysing, interpreting and presenting user-friendly data. Not only are records poorly kept, but the nonchalant attitudes towards the value of data are disturbing.
Collecting relevant data is of paramount importance to sourcing the correct information about a socio-economic challenge from the beneficiaries' point of view. Development interventions are still heavily carried out in a top-down fashion, with target beneficiaries often complaining that what is given is not what is required to make the changes the policy seeks to bring about. Before or during the policy design phase, it would make sense to carry out a situational analysis to understand the key target groups' needs and priorities, the demographic factors, and the capacity challenges of staff in government, NGOs (non-governmental organisations) and CBOs (community-based organisations). There is sufficient evidence from my working experience in the development field that the capacity to implement the required policies is severely lacking; hence, the collection of timely, useful and relevant data is frequently compromised. The diagram below illustrates an evidence-based policy pathway.2
1 Jones, B. 2011. Linking Research to Policy: The African Development Bank as Knowledge Broker, Series N° 131, African Development Bank, Tunis, Tunisia.
2 Bowen, S., & Zwi, A.B. 2005. Pathways to “evidence-informed” policy and practice: A framework for action. PLoS Med, 2(7): e166.
Source: Adapted from Bowen and Zwi (2005)
The above pathway to evidence-based policy and practice involves five phases:
(1) policy idea,
(2) sourcing the evidence,
(3) implementing the evidence,
(4) identifying implementation gaps, and
(5) monitoring and evaluation.
The policymaking context is exceedingly political and rapidly changing, and it depends on a variety of factors, inputs and relationships. It is of paramount importance to ensure that monitoring and evaluation is part of the agenda-setting of the policy, to guarantee that useful, relevant and current data is collected. This will enable the assessment of the overall impact of the policy on the beneficiaries. Problem identification in the policy-setting phase needs to be informed by research and a situational analysis; the situational analysis can enrich or negate the data of existing research and can necessitate a national survey to update old or missing data. This process can further highlight the correct target group for the policy intervention and the areas that are an urgent priority. Often a policy is drafted and only at the evaluation stage does it come to light that there is no useful data that can tell us what impact the policy had. It is only at this late stage that decision-makers realise the need for accurate data.
Once the evidence is used to draft a policy discussion paper making the case for the policy, there needs to be a discussion around what the data says, and the implementing partners of the policy must be confirmed to have the capacity to implement it. In developing countries, it is often discovered that great policies are drafted but there is limited capacity to implement them. Thus, in some countries, more structures are created, wasting further financial resources, rather than addressing the capacity constraints. From my work in the development field, a significant capacity constraint is work ethic, coupled with monitoring and evaluation officers' poor understanding of their own job description when it comes to data collection, collation, analysis and presentation. In addition, key management staff's failure to understand the importance of data results in data being collected in an ad hoc manner, with no one being held accountable for missing data.
Once the policy is in place and is being implemented, monitoring needs to occur to collect data on ongoing activities and outputs and to inform management when implementation diverges from the objectives of the policy. Monitoring further alerts management when implementation of the policy itself has become futile, allowing
sufficient response time to address these issues and bring implementation back on track.

[Figure: Evidence-based policy pathway. Five stages: Policy idea — government directive to design and implement a policy addressing a particular need; Sourcing the evidence — research, situational analysis to understand the context on the ground, identification of needs and priorities, collection of data; Using the evidence — data collation, analysis, interpretation and presentation to inform policy; Identifying implementation gaps — identification of capacity gaps among implementing staff and government partners to ensure that the policy is implementable; Monitoring and evaluation — continued data collection through monitoring and assessment of the policy through evaluation to understand how the policy worked and what the impact was on the lives of the beneficiaries.]

An evaluation has to be
conducted to assess what impact the policy had on the target beneficiaries, which areas worked well and which did not, and what could be done to improve the policy design.
It is important to note that neither policy nor monitoring and evaluation can occur without useful, quality data. It is data that presents the case for a policy or development intervention, and it is data that describes how, when and why the policy or intervention worked or did not work and provides insights into how to improve in the future. The value of data cannot be overstated, and the mandate to ensure accountability for quality, timely collection of data needs to be championed by managers. After all, without data we do not know what the reality on the ground is, or how to address the issues in ways that will result in impactful and meaningful change in the lives of the beneficiaries of the policy or intervention.