Who is listening to who, how well and with what effect? By Daniel Ticehurst

“The purpose of this paper is to stimulate debate on what makes for good monitoring. It draws on my reading of history and perceptions of current practice, in the development aid sector and a little in the corporate sector. I dwell on the history deliberately, as it throws up good practice and relevant lessons. This is particularly instructive given the resurgence of the aid industry’s focus on results and recent claims about scant experience in involving intended beneficiaries and establishing feedback loops. The main audience I have in mind is not those associated with managing or carrying out evaluations. Rather, this paper is aimed at managers responsible for monitoring (be they directors in ministries, managers in consulting companies or NGOs, or civil servants in donor agencies who oversee programme implementation) and it aims to improve a neglected area.” (Daniel Ticehurst)


WHO IS LISTENING TO WHO, HOW WELL AND WITH WHAT EFFECT?

DANIEL TICEHURST[1]
October 16th, 2012

“Just weighing a pig doesn’t fatten it. You can weigh it all the time, but it’s not making the hog fatter.” President Obama, Green Bay town hall meeting, June 11th, 2009. http://pifactory.wordpress.com/2009/06/16/just-weighing-a-pig-doesnt-fatten-it-obama-hint-on-testing/

[1] Project Director, Performance Management and Evaluation, HTSPE Ltd.
ACKNOWLEDGEMENTS

I’d like to thank the following people for their comments on early drafts: Andrew Temu, James Gilling, Jonathan Mitchell, Rick Davies, Harold Lockwood, Mike Daplyn, David Booth, Simon Maxwell, Ian Goldman, Owen Barder, Natasha Nel, Susie Turrall and Patricia Woods. Particular thanks go to Martine Zeuthen for her support throughout, and to Larry Salmen, whose comments and writings encouraged me to start and to keep going. For their support in editing the first and final drafts, special thanks to Michael Flint, Clive English and Sarah Leigh-Hunt.

CONTENTS

Executive Summary
1. Introduction
   a. What are Results?
   b. What are the practical differences between Monitoring and Evaluation?
2. The Starter Problem
3. The Value of Monitoring in Understanding Beneficiary Values
4. The Need for Feedback Loops
5. The Importance of Institutions
6. Main Observations of Current Practice
7. Conclusions
EXECUTIVE SUMMARY

I am a so-called Monitoring and Evaluation (M&E) specialist, although my passion is monitoring. Hence I dislike the collective term ‘M&E’: I see the two as very different things. I also question the setting up of Monitoring and especially Evaluation units on development aid programmes: the skills and processes necessary for good monitoring should be an integral part of management, and evaluation should be seen as a different function. I often find that ‘M&E’ experts over-complicate the already challenging task of managing development programmes. The work of a monitoring specialist is to help instil an understanding of what a good monitoring process looks like and, based on this, to support those responsible for managing programmes to work together in following that process through, so as to drive better performance rather than just comment on it.

I have spent most of my 20 years in development aid working on long-term assignments, mainly in various countries in Africa and exclusively on ‘M&E’ across the agriculture and private sector development sectors. Of course, just because I have done nothing else but ‘M&E’ does not mean I excel at both. It has, however, meant that I have had opportunities to make mistakes and to learn from them and from the work of others.

The purpose of this paper is to stimulate debate on what makes for good monitoring. It draws on my reading of history and perceptions of current practice, in the development aid sector and a little in the corporate sector. I dwell on the history deliberately, as it throws up good practice and relevant lessons. This is particularly instructive given the resurgence of the aid industry’s focus on results and recent claims about scant experience in involving intended beneficiaries[2] and establishing feedback loops.[3] The main audience I have in mind is not those associated with managing or carrying out evaluations. Rather, this paper is aimed at managers responsible for monitoring (be they directors in ministries, managers in consulting companies or NGOs, or civil servants in donor agencies who oversee programme implementation) and it aims to improve a neglected area.

Human behaviour is unpredictable and people’s values vary widely. In the development context, the challenge lies in how to understand the assumptions development aid programmes make about their beneficiaries. Ultimately, understanding behaviours and decisions is what economics is all about.[4] One of its tasks is to show how ignorant we sometimes are in imagining what we can design to bring about change.[5]

[2] People or institutions who are meant to benefit from a particular development initiative.
[3] The Sorry State of M&E in Agriculture: Can People-centred Approaches Help? Lawrence Haddad, Johanna Lindstrom and Yvonne Pinto. Institute of Development Studies, 2010.
[4] The Undercover Economist. Tim Harford. Abacus, an imprint of Little, Brown Book Group, 2006.
[5] The Fatal Conceit. Friedrich von Hayek. University of Chicago Press, 1991.

As Hayek explains, our inability to discuss seriously what really explains underlying problems in development is often due to timidity about soiling our hands by moving from purely scientific questions into value questions.

Both Hayek and Harford argue that a subtle process of trial and error can produce a highly successful system. Certainly, there are no reliable models of behaviour that can predict the results of development aid programmes with certainty. Development aid programmes are delivered in complex and highly unpredictable environments and thus
are associated with, and subject to, all kinds of ‘jinks’ and ‘sways’. These are often overlooked and/or under-estimated in how they influence the results sought and, ultimately, how those results are monitored and evaluated.

Furthermore, as Rondinelli has observed, the way programmes are designed and monitored sits uncomfortably with these complexities: “the procedures adopted for designing and implementing aid interventions often become ever more rigid and detailed at the same time as recognising that development problems are more uncertain and less amenable to systematic design, analysis and monitoring.”[6]

This highlights the need to find ways of understanding values: appreciating and learning about, through feedback, the opinions of beneficiaries in terms of their assessment of the relevance and quality of the aid received. For development to have an impact on poverty reduction, the learning process must incorporate and use the perspectives of beneficiaries.

As Barder comments, and as a recent Harvard Business Review article makes explicit, approaches to gauging client feedback are under-developed for two key reasons:[7]

• Either beneficiaries and institutions are simply not asked for their opinions, because monitoring is limited to, for example, enabling subsequent impact assessment and/or ‘tracking’ effort and spend; or, if they are asked,
• The beneficiaries’ response to the performance of those providing support or services is seldom validated with them and/or fed back in the form of remedial actions. So why bother providing feedback in the first place?

In the business world, realising that customer retention is more critical than ever, companies have ramped up their efforts to listen to customers. Many, however, struggle to convert their findings into practical prescriptions. Some are addressing that challenge by creating feedback loops that start at the front line, such as Pfizer, which uses approaches similar to what development aid refers to as participatory storytelling. Unlike development aid, however, the concept of participation is applied to allowing opportunities for front-line staff, in addition to customers or beneficiaries, to tell their stories. Many companies have succeeded in retaining customers by asking them for simple feedback and then empowering front-line employees to act swiftly on it. The importance of understanding staff and client or customer satisfaction was highlighted through the balanced scorecard of Kaplan and Norton.[8]

[6] Development Projects as Policy Experiments: An Adaptive Approach to Development Administration. Development and Underdevelopment Series. Methuen and Co Ltd, 1983.
[7] http://www.owen.org/blog/4018 (2010) and “Closing the Customer Feedback Loop”. Rob Markey, Fred Reichheld and Andreas Dullweber. Harvard Business Review, December 2009.
[8] The Balanced Scorecard, developed by Robert Kaplan and David Norton in 1994, is a performance management tool used by managers to keep track of the execution of activities by the staff within their control and to monitor the consequences arising from those actions. Its balanced nature comes from being built around four perspectives: Financial (how do we look to shareholders?), Customer (how do we look to our customers?), Internal Business Process (what must we excel at?) and Learning and Growth (how can we continue to improve and create value?).
In the field of what is called Monitoring and Evaluation (M&E), few efforts try to understand behaviour. They tend to control expenditure, analyse other numbers and assess developmental change, but not so much values and opinions. I maintain that trying to assess profound and lasting developmental impacts in the absence of effective feedback loops is impractical and of limited use. I further argue that such feedback loops should be a core feature of any monitoring system and, for practical management reasons, should not be the sole domain of evaluation.

I do not want to come across as too black and white, or dogmatic, about what constitutes Monitoring as opposed to Evaluation. Although opinions differ as to the extent to which Evaluation is independent of and/or relates to Monitoring, I find it useful to define the main sources of difference according to: a) the responsibilities and primary users of the information generated; b) their objectives; c) their requirements for comparative analysis (across time, people and space); and d) their reference periods.

I see monitoring as having three inter-related parts:

• one that controls expenditures in the context of cataloguing activities, involving a participatory approach between those responsible for delivering the support and the finance team;
• another that tracks and analyses the reach of the support these activities make available (i.e. outputs) to intended beneficiaries, and how this varies; and
• one that gauges how and to what extent beneficiaries respond to this support – their assessment of its quality, relevance and, ultimately, usefulness – and also how this varies among them.

The questions associated with the third component, I maintain, should not be held in abeyance pending an evaluation. Doing so begs very real questions as to the extent to which managers are accountable for the quality and relevance of the support if they are not listening to beneficiary opinion and response. Monitoring needs to be less than periodically surveying socio-economic impacts, irrespective of approach, but also more than just cataloguing ‘outputs and activities’ and controlling ‘spend’.[9]

[9] Such surveys perhaps need doing, but not by those attached to programmes.

That what I refer to as the third component of any good monitoring system others may see as evaluation gives me hope: good monitoring practice involves getting outside the office, listening to beneficiaries, taking what they say on board, re-adjusting accordingly and closing the feedback loop by letting them know what you have done with their feedback.

Of course, evaluations do this as well; understanding the values and behaviours of beneficiaries is an aim monitoring and evaluation share. The difference in how they try to achieve this understanding lies in approach: who does it, how often, why, with what type of comparisons across people and places, and for whom?

Monitoring can and should ultimately drive better performance and involve participatory processes including, but not limited to, those between the intervention and intended beneficiaries (be they the poor themselves or institutions that serve them, depending on
the outcome sought).[10] Having the ability to listen to and understand how and in what ways beneficiaries respond to development programmes, and feeding this information back to decision-makers, should not be judged by academic standards alone.

[10] As with Michael Quinn Patton’s view on utilisation-focussed evaluation, the bottom-line objective for monitoring is how it really makes a difference to improving programme performance so as to enhance prospects for bringing about lasting change.

I do not see the problem as an absence of tools or methods. They are there. Beneficiary Assessment is one stand-out example, and it is not new: the approach was first developed in the late 1980s and described in 1995.[11] Another is Casley and Kumar’s Beneficiary Contact Monitoring (BCM), the equivalent, alongside beneficiary assessments, of what I describe as the third component of a monitoring system.[12] Such assessments, I argue, can better enable improvements in the quality and usefulness of monitoring.

[11] “…an approach to information gathering which assesses the value of an activity as it is perceived by its principal users; … a systematic inquiry into people’s values and behaviour in relation to a planned or on-going intervention for social, institutional and economic change.” Lawrence F. Salmen, Beneficiary Assessment: An Approach Described, Social Development Paper Number 10 (Washington, D.C.: World Bank, July 1995), p. 1.
[12] Project Monitoring and Evaluation in Agriculture. Dennis J. Casley and Krishna Kumar. Johns Hopkins University Press, 1987.

I hope this paper provides a more balanced understanding of, and interest in, Monitoring in the face of a growing preoccupation with trying to evaluate results, including and especially impacts. I would like to believe that it could also help take advantage of a similar movement by focussing more on taking into account, and learning from, the views of beneficiaries in assessing the value of investments in aid and how well they are delivered. Doing this should be treated as an integral element of monitoring.

I am a ‘fan’ of logframes and value the need to develop results chains. The major strength of the approach is that it provides an opportunity to collect evidence and think through a programme’s theory of change.

However, it is important to distinguish between the logical framework – the matrix which summarises the structure of a programme and how this is broken down among the hierarchy of objectives – and the approach – the process by which, and the evidence with which, this is defined. With this in mind, my qualms are about how easily logical frameworks can be: a) mis-used, through being developed without adequate participation of all stakeholders, failing to balance logical thinking with deeper critical reflection, and organisations filling in the boxes simply to receive funding; and b) mis-managed, by not serving as an iterative reference point that keeps programmes up to speed with realities through providing opportunities for beneficiary assessments. There is nothing intrinsic to the process of developing logframes that explains the need for a separate approach built around theories of change.[13]

[13] http://web.mit.edu/urbanupgrading/upgrading/issues-tools/tools/ZOPP.html
[14] Pers. comm., Simon Maxwell.

Currently, M&E processes and systems in public sector development aid at higher levels (Outcomes and Impacts) tend to be over-prescriptive and focussed on measuring pre-defined indicators within politically defined time periods – i.e. electoral cycles. The really challenging questions are not how to do better monitoring, but rather (a) what are the bureaucratic pressures that lead civil servants to behave in certain ways and (b) how to change them.[14] Typically, political time periods of five years ‘force’ over-ambition and
therefore the premature measurement of developmental results. The systems civil servants are obliged to set up are limited in providing information which can help to:

A. Damp down politically inspired over-ambition regarding outcomes, and especially impacts, that may inadvertently undermine the case for development aid;
B. Safeguard against the Law of Unintended Consequences (or at least illuminate where such consequences are happening, by testing the assumptions during implementation); and
C. Take account of alternative views (“theories of change”), especially those of beneficiaries and field staff regarding the quality and relevance of their support, in order to help ensure the delivery of results – the true purpose of monitoring.

This can be accomplished by establishing feedback loops based on beneficiary perceptions of the quality of project/programme services and ‘products’ (beneficiaries may be the general population and/or local institutions) and their ‘results’. These in turn require:

1) Opportunities to encourage often poor and vulnerable beneficiaries and front-line staff to express their views;
2) Sufficient real-time flexibility in project/programme design to permit incorporation of feedback;
3) Commitment by managers, and by those responsible for the oversight of implementation, to monitoring programme consequences, intended, positive or otherwise; and
4) Assurances by those with the authority to allocate resources that they will validate feedback among beneficiaries and then incorporate remedial actions in projects/programmes.

The rationale of this paper is to explain some of the reasons why monitoring does not, yet could with effect and at reasonable cost, do the following:

1. Make effective contributions to delivering significant development results that matter most to beneficiaries; and
2. Better understand the ‘theory’ underlying aid programmes through monitoring processes and establishing feedback loops, in real time, with beneficiaries.[15]

[15] This paper uses the term beneficiary in a collective sense: in relation to either the poor themselves (for aid programmes that deliver support directly to them) or the institutions that serve them (for programmes that support, for example, partner country ministries, NGOs and markets, formal and/or informal).
