
PERFORMANCE MANAGEMENT: A PRACTICAL GUIDE FOR MUNICIPALITIES

Prepared by Dr Malik Khalid Mehmood

Assisted by:
Katharine Mark
Ritu Nayyar-Stone
Sonia Ignatova

UNDP, Bratislava Regional Centre
Grosslingova 35, 811 09 Bratislava
Slovak Republic

March 2007
Project 08013-000, Contract No. PS-2006/10
TABLE OF CONTENTS

INTRODUCTION AND SCOPE
    The Millennium Development Goals and their Role in this Guide
    Focus of the Guide
    Remainder of Guide
STEP 1: BEGIN GOVERNING FOR RESULTS IN YOUR MUNICIPALITY (ORGANIZE THE EFFORT AND DETERMINE THE SCOPE)
    What is Performance Management? How Can It Help Local Governments?
    Focus on Service Outcomes
    Limitations
    Select the Scope and Coverage of the Performance Measurement Process
STEP 2: IDENTIFY OUTCOMES AND OUTCOME INDICATORS
    Identify the Service/Program Objectives and Customers
    Select the Important Outcomes for Your Service/Program
    Categorize Performance Indicators
    Select Outcome Indicator Breakouts (Disaggregation) of Each Outcome Indicator by Key Characteristics
STEP 3: SELECT DATA COLLECTION SOURCES AND PROCEDURES
    Use Agency Records
    Survey Citizens (Including Businesses)
    Data Quality Control
    The Cost of Performance Measurement
STEP 4: ANALYZE PERFORMANCE DATA
    The Importance of Analyzing Performance Data
    How Performance Data Can Help
    Do Some Preliminary Work
    Examine the Aggregate Outcome Data
    Examine "Breakout" Data
    Examine Findings Across Indicators
    Make Sense of the Numbers
STEP 5: REPORT PERFORMANCE RESULTS
    The Importance of Good Reporting
    Internal Reporting
    External Reporting
    Other Information That Should Be Included When Reporting Performance Data, in Both Internal and External Reports
    What If the Performance News Is Bad?
    Dissemination of Performance Reports
STEP 6: SET MUNICIPAL TARGETS
    The Importance of Setting Targets
    Set Targets
    Connect Local Targets to National Targets
    Relation of Municipality Targets to Millennium Development Goals (MDGs) and Strategic Plans
    Concerns About Targets and Target Setting
    Final Comment
STEP 7: USE PERFORMANCE INFORMATION TO IMPROVE SERVICES
    The Importance of Using the Information
    Undertake Service Improvement Action Plans
    Analyze Options/Establish Priorities
    Hold "How Are We Doing?" Sessions
    Performance Budgeting
    Capital Budgeting
    Strategic Planning
    Motivate Your Employees
    Performance Contracting
    Contribute to National and Regional Information Sources
    Final Comment on Using Performance Information
STEP 8: BUILD MUNICIPAL CAPACITY
    Decide What Training Is Required, for Whom, and How Much
FINAL WORDS
APPENDIX A: Set of Sample Service Outcome/Quality Indicators for a Variety of Municipal Services
APPENDIX B: The Millennium Development Goals and Likely Data Collection Sources
APPENDIX C: Sample Customer Questionnaire for Municipal Services, User Survey for Water Service, Patient Feedback Form
APPENDIX D: Sample Procedures for Rating Certain Quality Elements of Municipal Services Using Trained Observer Ratings: Street Cleanliness
APPENDIX E: Health Status, Mortality, Country Data for MDG 7, Target 10
PREFACE

Preparation of this guide has been sponsored by the UNDP as part of an effort to support the achievement of the Millennium Development Goals.

The Millennium Development Goals (MDGs) were derived from the United Nations Millennium Declaration and adopted by 189 countries in 2000. The MDGs focus on the most critical aspects of poverty and on the factors that contribute to both income and human poverty. They were reconfirmed in 2005 as the Global Development Agenda. The MDGs are a set of global goals whose achievement depends on the implementation by countries of national MDG agendas aimed at achieving nationally adjusted global goals. The MDGs have played a special role in developing and transition countries because they provide one set of goals that can help those countries, and their cities as well, set concrete targets and concentrate resources on meeting them. A large number of other examples and resources are now available around the world, as more and more national and city governments choose to measure their performance and improve the services they provide to their citizens.

UNDP advocates the adoption by both national and local governments of an MDG agenda (see the Millennium Declaration, UN, 2000). While this Guide was first targeted at those countries, and specifically the municipalities of such countries, that have decided to adopt an MDG agenda (for further discussion of the relationship between national and local MDG agendas, see Capacity Development for Localizing the MDGs, UNDP 2006), the performance management methodology in this Guide can be used by all municipalities, irrespective of whether they have adopted the achievement of the MDGs as their goals.

UNDP defines localizing the MDGs as the process of designing (or adjusting) and implementing local development strategies to achieve the MDGs (or, more specifically, to achieve locally adapted MDG targets). This implies either adapting and sequencing the indicators and targets of existing local development strategies as needed, or elaborating an MDG-based development strategy that reflects local priorities and realities. For this approach to be successful, it should be locally owned and participatory. UNDP and other UN agencies have produced toolkits and guides to help municipalities localize the MDGs and thus address regional disparities and marginalization at the sub-national level (see the UNDP and UN-Habitat toolkits).

There is a compelling logic to believe that unless the type of goals included in the MDGs are brought to the local level (localized), national and global achievements will be skewed. National targets and indicators represent national averages. Achieving them would require targeted interventions in pockets of poverty, which are often very context specific. In order to affect the lives of people, goals such as those in the MDGs need to be adapted to the current level of development, translated into local realities, and embedded into local planning processes.

A Special Note to Municipalities:

Integrating the MDGs in the municipal performance framework can bring several advantages to your City and to your country, such as:
- Making sure that the performance of services related to key development problems, such as poverty, health, education, and the environment, is monitored, and that weak performance is identified and addressed;
- Linking your local development agenda to the national MDG agenda (if it exists), thus ensuring that the work of your city contributes to nationally set goals (in the setting of which you should ideally have participated);
- Ensuring that the work of your City contributes to the global development agenda and to a better and safer world, which also benefits your city (poverty, with its related crime and disease, and environmental damage have no frontiers nowadays);
- Increasing the chances that your City will benefit from financial support from the central government or the donor community.

This Guide is intended to support you in this effort.
INTRODUCTION AND SCOPE

This guide is aimed at helping all municipalities accomplish their goals by monitoring their own performance and using the information they get to improve the lives of all their citizens.

Why establish a performance monitoring system to improve results? Many countries of Europe, the CIS region, and elsewhere in the world have initiated political and administrative decentralization processes. Decentralization means transparency and accountability to taxpayers, as well as local governments that strive to continually improve the services they provide to their citizens.

Local authorities play a vital role in improving the well-being of their citizens. They provide the most basic everyday services, such as solid waste collection, road maintenance, or access to water, as well as working in many other ways to help their citizens out of poverty and improve their quality of life. Integrating a sound performance measurement process is only a first step. Often faced with very limited resources, poor quality infrastructure, and historically weak trust and communication between citizens and local government, local government officials and staff in Eastern Europe and elsewhere frequently feel especially handicapped in trying to implement improved services that meet citizen needs.

Setting up a system to monitor the performance of municipal programs and policies will enable the municipality and its agencies to:
• Establish strategic plans, such as City Development Strategies
• Regularly monitor progress in meeting strategic plan and annual performance targets
• Use performance budgeting as a means to link resources with results
• Identify weakly performing services and programs so that the necessary actions can be taken to improve performance
• Allocate their own resources (not just city budget funds, but also city staff and equipment) in the most effective way
• Help motivate public employees to continually focus on improving the quality of their services to citizens
• Identify best practices in order to learn from high-performing entities
• Compare performance across localities, regions, and countries to help identify achievable targets for the municipality's own performance and to identify areas that need additional strengthening or resources

As suggested above, and as will be made clearer in this Guide, establishing a performance monitoring system can have multiple benefits for the work of a municipality: strengthening strategic planning; learning which service approaches work well and which do not; improving budgeting and justifying the allocation of funds for initiatives to improve service delivery; and encouraging the reporting of results to citizens. The Guide suggests steps a municipality can take both to improve collection of information on performance ("performance measurement") and to use that information to help get better results ("performance management").

The use of performance indicators and targets to improve conditions for citizens has increased over the last decade as local and national governments around the world have become increasingly aware of the value of results-based decision-making.
THE MILLENNIUM DEVELOPMENT GOALS AND THEIR ROLE IN THIS GUIDE

The MDGs provide a clear framework for national and local development efforts, taking a holistic, multi-dimensional approach to poverty reduction and human development. They link the global, national, and local levels through the same set of goals and provide a target-based, measurable framework for accounting for national and local development results. As already indicated, the MDGs focus on critical aspects of poverty and on the factors that contribute to poverty. These include essential areas such as health, education, access to drinking water, and adequate shelter. The eight MDGs are listed below:

Goal 1: Eradicate extreme poverty and hunger
Goal 2: Achieve universal primary education
Goal 3: Promote gender equality and empower women
Goal 4: Reduce child mortality
Goal 5: Improve maternal health
Goal 6: Combat HIV/AIDS, malaria and other diseases
Goal 7: Ensure environmental sustainability
Goal 8: Develop a global partnership for development

The Goals are accompanied by "indicators" and "targets," which are more specific, to allow countries to focus on particular areas that are important to monitor. Governments and municipalities that have chosen to adopt an MDG agenda can adapt the global targets and indicators to national and local circumstances. They should also include additional outcomes and indicators important to specific local conditions.

All the MDGs, with associated targets and indicators, are listed in Appendix B. We have marked the indicators that might be of particular interest to local governments as part of the indicators they choose to monitor. Some examples of MDGs and the indicators used to monitor their progress are:

Goal 2: Achieve universal primary education
    Indicator 7. Proportion of pupils starting grade 1 who reach grade 5
Goal 5: Improve maternal health
    Indicator 17. Proportion of births attended by skilled health personnel
Goal 7: Ensure environmental sustainability
    Indicator 29. Proportion of population with sustainable access to an improved water source
    Indicator 30. Proportion of people with access to improved sanitation

Because of the international interest in the MDGs, they have helped focus national attention on the importance of identifying priority outcomes and using performance measurement tools to reach those outcomes. Most international donors, including the UN system as a whole, naturally support the MDGs and are aware of the importance, more broadly, of monitoring performance in key areas of public service. The adoption of an MDG agenda, and initiatives related to improving public service delivery, can act as effective advocacy tools and help mobilize resources for local development. As cities begin to establish their own performance measurement systems, it will be useful to have allies at the national and international level who may be able to provide resources and data to strengthen the local efforts.

This Guide will refer to the MDGs throughout, identifying associated resources for local governments and suggesting opportunities for using some aspects of the MDGs.
This guide shows cities, first, how municipalities can use performance measurement and performance management to improve their services and their responsiveness to citizens; how they might use outcome indicators, including MDG indicators, as part of their performance monitoring; how they can set local targets for each of their indicators and reach the desired outcomes; and how they can also contribute to the efforts of the country itself to set and meet country-wide targets to mitigate poverty and improve the quality of life at the national level.

FOCUS OF THE GUIDE

The material in this guide focuses on "Governing for Results." Most suggestions contained here are intended to encourage municipalities and their agencies to seek and use information on the results (the benefits) that their services and programs are achieving for their citizens. This means explicitly considering the specific likely effects on their citizens when making policy and program decisions. Achieving results at as low a financial cost as possible (that is, being efficient) is another important area of municipal performance. However, that is a subject for another guide.

This guide focuses on the process of regularly monitoring the outcomes of a municipality's services. By regularly, we mean at least annually for purposes such as budgeting, with more frequent monitoring, such as quarterly or monthly, by individual agency managers. The focus is on developing a practical process that municipalities can adapt to their own situation for regularly, and reliably, tracking their own progress on outcomes of importance to their citizens. The Guide does not discuss more in-depth evaluation studies, in which, if resources are available, the government can sponsor a closer look into how well a particular program or service is performing and why. Such in-depth studies can on occasion be very useful to municipalities, and the regular monitoring of outcomes discussed here can often provide considerable information for such studies.

This guide is complementary to two guides that have been produced by UN agencies: (1) Toolkit for Localizing the MDGs (UNDP); and (2) Localizing the MDGs: A Guide for Municipalities and Local Partners (UN-Habitat). The two publications deal with strategic planning that integrates and localizes the MDGs. As will become clear from this guide, performance management can be an element of an MDG-based strategic planning and implementation process, or of any strategic planning process, and a tool for monitoring the implementation of strategic plans. Readers of this guide are therefore encouraged to also consult at least one of the two documents mentioned above.

REMAINDER OF GUIDE

This Guide will help you develop high quality performance management systems or improve the ones already in place. The guide is organized in accordance with the basic steps for instituting performance management in your local government, as listed below.

Step 1. Organize the Effort and Determine the Scope
Step 2. Identify Outcomes and Outcome Indicators
Step 3. Select the Data Collection Sources and Procedures
Step 4. Analyze the Performance Data
Step 5. Report Performance Results
Step 6. Set Municipality Targets for Each Performance Indicator
Step 7. Use the Performance Information to Improve Services and Establish Accountability
Step 8. Build Municipal Capacity for Governing for Results

This guide takes the reader through each of these steps, with each chapter corresponding to one step. The flow chart below depicts the usual order of these steps. It is important to note, however, that in many cases these steps are iterative. For example, after setting targets (Step 6), data will usually again be collected (Step 3) and analyzed (Step 4) in order to monitor progress, and then reported (Step 5), perhaps leading to the establishment of new targets (Step 6).

[Flow chart: Step 1 (Choose Services) → Step 2 (Select Outcomes and Identify Indicators) → Step 3 (Data Collection) → Step 4 (Analyze) → Step 5 (Report) → Step 6 (Set Targets) → Step 7 (Use), with Step 8 (Build Municipal Capacity) supporting all steps.]

A number of Appendices are provided. These include:

Appendix A: A candidate set of outcome indicators for a number of municipal services. Use these to give you ideas for the kinds of indicators you might want to select in your municipality.
Appendix B: Millennium Development Goals with associated targets and indicators, and likely data collection sources for indicators applicable to municipalities.
Appendix C: Examples of a sample customer questionnaire for municipal services, an illustrative user survey for a water service, and a patient feedback form from a hospital in India.
Appendix D: Procedures for rating certain quality elements of municipal services using "trained observer ratings."
Appendix E: Data for a number of performance indicators, to illustrate data that could help a municipality set its own targets for those indicators.
Step 1. BEGIN GOVERNING FOR RESULTS IN YOUR MUNICIPALITY (ORGANIZE THE EFFORT AND DETERMINE THE SCOPE)

WHAT IS PERFORMANCE MANAGEMENT? HOW CAN IT HELP LOCAL GOVERNMENTS?

Performance management, sometimes referred to as "governing for results," is a system of regularly measuring the results (outcomes) of public sector programs, organizations, or individuals, and using this information to increase efficiency in service delivery. Public officials need regular feedback on the effectiveness of their services in order to make improvements, while at the same time the public wants to know that the government is spending their tax money in the best way to meet citizens' needs.

What is the difference between performance measurement and performance management? Performance measurement produces the information; performance management goes one step further, using those measurements to manage services better.

Traditionally, this kind of information has been hard to get, emerging only piecemeal through complaints or occasional anecdotes. Over the last three decades, performance management has become an increasingly popular way for governments around the world, both local and national, to manage their programs and services to achieve the results their citizens expect.

BENEFITS OF A PERFORMANCE MANAGEMENT SYSTEM TO LOCAL GOVERNANCE
• Improving service quality and outcomes;
• Improving resource allocation and justifying agency budgets or service cuts;
• Making public agencies accountable for results to elected officials and the public;
• Increasing citizens' trust in the local government; and
• Making work more interesting and satisfying for public employees because of its citizen focus.

Governing for results has encouraged governments both to make the results they are seeking explicit and to design programs and policies to actively and directly seek those results. Monitoring progress towards those results provides constant feedback into the policy and implementation process, improving efforts to achieve their objectives.
FOCUS ON SERVICE OUTCOMES

Performance management is based on a simple concept: a focus on service outcomes, or actual results, rather than only on the quantity of service that an agency provides. This implies assessing the performance of the government based on what its services mean to customers, not on physical outputs. The work of the local government is measured by what the citizen or user of the service actually experiences. Are the roads in good enough condition that children can get to school in the rainy season? Do the children stay in school until they graduate? Do pregnant women visit the primary health clinic during pregnancy, and does that result in healthier babies? Is garbage collected regularly, and does that have an effect on health?

This simple idea, however, means that many people in local government need to think in a different way. It will not be enough to measure how many kilometers are paved, or whether the clinic has the right staff. It will be important to see what the results of those efforts are in order to know whether they are working well.

By tracking performance indicators, and clearly linking those indicators to the results that the local government wants to see, the system provides decision makers with better information. With this information they can make better decisions, and show why they made those decisions. Using performance management, local governments can demonstrate their commitment to providing quality service.

This way of thinking can proceed at several levels. For example, a city may believe it is very important to provide a "safe, healthy, and clean environment for all the citizens of the municipality." That may lead to the identification of a number of outcomes to be sought in several different services: good quality service at the primary health clinic, public awareness of health hazards, better solid waste collection, and clean water sources. Each of those in turn can require a number of different outcomes. For example, the head of solid waste management may want to ensure that streets are clean, that citizens are satisfied with collection service, that landfills are well managed, and that the service achieves full cost recovery so that good service is sustainable.

LIMITATIONS

Municipal officials need to recognize important limitations of the performance information that would come from the steps discussed here. These include:
• The regularly collected annual performance information discussed in this guide will not tell you WHY the recorded performance levels were good or bad. (However, a well-designed performance measurement process can provide useful clues. For performance indicators that show unexpectedly low, or high, results, more in-depth evaluations will be needed to get at the causes.)
• Similarly, the performance information does not tell municipal officials what has to be done to correct problems identified by the performance measurement data.
• Performance measurement information describes past performance. It does not by itself tell what future results will be. However, information on past performance provides a major source of information for estimating future results, as needed for making budget, policy, and program decisions.
GET STARTED

Depending on their size, governance system, and capacity constraints, governments around the world are using different tools to govern for results. One or more of the following approaches might be used to start developing a performance management system in your city:
• Develop and track selected performance indicators in each service sector. Make policy decisions based on the information, and disclose this information in city performance reports and the budget.
• Develop service improvement action plans in priority sectors.
• Apply performance management to the internal processes of the local government, for example to increase municipal revenue or to reduce the time it takes for citizens to register births or marriages.
• Implement a comprehensive performance management system in your city that combines strategic planning, setting goals and objectives for each service sector, citizen participation, and the use of performance information.
• Adopt an MDG agenda and integrate the MDG Goals into the city's strategic plan. For each of the MDG Goals, the municipality should set targets and indicators that reflect local circumstances.

Who should be involved?

Many stakeholders play a role in the process of implementing and using a performance management system. Some of the key actors are described here.

Mayor. The Mayor should be a principal user of performance information, especially in establishing major policies and in reviewing city programs and the budget. In addition, the Mayor will play a major role in setting the climate for the shift to a results orientation. The Mayor's support is important for making sure that adequate resources are allocated to implementing and, later, sustaining the process.

City Council. The support of the council will be essential to the success of the enterprise, not only through the provision of funding when necessary, but also through underlining the importance of performance information by requesting it and using it. Elected council members will find outcome information to be very useful in carrying out their responsibilities, enabling them to more easily understand the impact of city services on their constituents and to make decisions in their appropriation and oversight roles.

Department Heads. The heads of different departments or institutions play a crucial role in facilitating and using performance data.

National and Regional Government. These entities should have their own performance management processes. Many of their agencies are likely to need performance data (such as on health, education, and welfare) to make their own policy and program decisions, and to provide information that will enable them to set national targets (such as for MDG indicators).
National and regional agencies may also use the performance indicators as a basis for identifying local areas that require special assistance, training, and help in achieving equity in the country, or for identifying best practices that can be shared with other localities to improve service everywhere.

International Donors. Donor loans or grants sometimes stipulate that infrastructure projects or grants-in-aid be subject to detailed performance monitoring, or be tied to achievable results. For example, the World Bank's output-based aid involves delegating service delivery to a third party (private firm or NGO) under contracts that tie payment to particular outputs or results delivered. Other agencies, such as USAID, CIDA, and DFID, also use performance monitoring for specific projects and for their own inter-agency performance, and in some cases link them to country plans. A number of international donors, including the UN system, are focusing assistance efforts on helping countries attain the MDG targets and may want to see the connections between municipal programs and those targets. These donors can provide valuable assistance in capacity building within municipalities by providing training on performance management systems and creating incentives for performance monitoring in service delivery.

Non-Governmental Organizations. Different types of NGOs can play two valuable roles in improving service delivery: (1) providing important quality public services themselves, in which case they should have their own performance management processes; and (2) playing a watchdog or advocacy role, increasing citizen awareness of their rights to better quality public services.

Business Community. Businesses are major consumers of government services (such as water, transportation, and economic development services). In addition, many services are delivered through contracts, and to ensure quality services the municipality can use performance contracts with these businesses. Data for performance monitoring will thus also have to be obtained from, or at least with the cooperation of, these businesses.

Citizens. Citizens are the major consumers of public services, and they pay (via their tax payments or fees) for many services. Citizens are also a major source of information needed to evaluate services, and they are an important source for identifying the outcomes that should be tracked, both annually and for strategic plans.

SELECT THE SCOPE AND COVERAGE OF THE PERFORMANCE MEASUREMENT PROCESS

Which services should you include?

You might choose to start with one service (or program), several, or cover all municipal services and programs. It is recommended here that you attempt to cover all your services, so that all municipal staff are encouraged to focus on results. Realistically, however, you may need to start with a few services at a time, so that the successes in one area can serve as motivation to introduce performance management more widely.
There are several different ways to decide where to start. One method is to identify the departments that might be "easiest": for example, areas where data are readily available, or where some inexpensive improvements are most likely to yield rapid results. Another might be to start with departments whose leadership is already very interested in adopting the new approach, which is also likely to make the pilot effort easier.

Another approach to choosing a starting place is to look at citizen priorities. Doing so might slow down the process, but it does identify an area that is likely to yield improvements in citizen satisfaction in the short term. International donors are often extremely supportive of such consultations with citizens and can help fund such efforts. More detail about such an approach is provided in Step 7 on using performance information, under the description of strategic planning. If there is strategic planning, the strategic goals identified will determine the services that are to be monitored.

The Millennium Development Goals can also provide guidance on what outcomes to select. A step-by-step approach for how a municipality might select services that contribute to the MDGs is as follows:

Step 1. Review the MDG Goals and select those global Goals that are relevant to local realities (in a transition or developing country context, all should normally be relevant).
Step 2. Identify the services that are related to those Goals.
Step 3. Adapt the indicators to local circumstances.
Step 4. Review the national (if they exist) and global indicators related to the selected (and adapted) targets, and select those that are relevant to local circumstances.
Step 5. Identify additional outcomes and indicators that are relevant to local circumstances and contribute to the MDG goals.
Step 6. Collect baseline data for all indicators.
Step 7. For each indicator, set targets appropriate for the locality, given its priorities, citizen preferences, needs, and available resources, bearing in mind possible benchmarks such as national MDG targets and performance in other localities, and adapting those benchmarks to local circumstances.
Step 8. Identify non-MDG-related city goals, identify indicators, and set targets.

Local governments in different countries are responsible for different types of functions. In addition, in some countries those functions may be subject to change, especially where a process of decentralization is underway. It makes sense to start with services that are fully under the control of the locality, because that is where improved decisions will have the greatest impact; but there have also been instances where performance measurement has been applied to functions that are mixed (shared between central and local government) or even largely central. It can be especially difficult in shared functions to ascertain the effectiveness of the service, and measuring performance can provide useful input on the various aspects of the service. Thus, the central government might gain information about how well local governments are carrying out a task, or performance information might show that central funding or regulations aren't yielding the results that were expected.

Exhibit 1-1 provides a list of exclusive and shared functions in Albania as of 2006.
It can be noted that while most cities that have used performance management in Albania have focused first on exclusive functions, such as solid waste collection, parks, street cleaning, or water provision, there have also been several efforts to measure performance in areas of shared functions, such as education and social assistance. Some of those examples will be provided in this Guide.

Exhibit 1-1. Functions of Communes and Municipalities in Albania
Exclusive Functions of Communes and Municipalities

I. Infrastructure and Public Services
a. Water supply;
b. Sewage and drainage system and [flood] protection canals in the residential areas;
c. Construction, rehabilitation and maintenance of local roads, sidewalks and squares;
ç. Public lighting;
d. Public transport;
dh. Cemeteries and funeral services;
e. City/village decoration;
ë. Parks and public spaces;
f. Waste management;
g. Urban planning, land management and housing according to the manner described in the law.

II. Social, Cultural and Recreational Functions
a. Saving and promoting the local culture and historic values, organization of activities and management of relevant institutions;
b. Organization of recreational activities and management of relevant institutions;
c. Social services, including orphanages, day care, elderly homes, etc.

III. Local Economic Development
a. The preparation of programs for local economic development;
b. The setting [regulation] and functioning of public market places and trade networks;
c. Small business development, as well as the carrying out of promotional activities, such as fairs and advertisement in public places;
ç. Performance of services in support of local economic development, such as information, necessary structures and infrastructure;
d. Veterinary service;
dh. The protection and development of local forests, pastures and natural resources of local character.

IV. Civil Security
a. The protection of public order to prevent administrative violations and enforce the implementation of commune or municipality acts;
b. Civil security.

Shared Functions of Communes and Municipalities
a. Pre-school and pre-university education;
b. Priority health service and protection of public health;
c. Social assistance and poverty alleviation and ensuring the functioning of relevant institutions;
ç. Public order and civil protection;
d. Environmental protection;
dh. Other shared functions as described by law.

Source: Law on Organization and Functioning of Local Governments, No. 8652, dated 31.07.2000.

A Good Approach: Establish a Municipal Steering Committee and Working Groups

Once you have determined the scope of your performance measurement process, a good way to begin implementing it is to establish a high-level, across-government Steering Committee to oversee the process. The Steering Committee can then establish a Working Group to lead work on the details of implementation. The Steering Committee should include such persons as:
- A representative of the Mayor
- A high-level official of the finance/budget office
- A high-level official of the human resources (personnel) office
- Several department heads
- A high-level information technology official
The Working Group should have representatives from the departments carrying out or overseeing the work in question, from the financial department, and from a number of related areas. Encourage each participating municipal department to have its own working group.

Exhibit 1-2 provides examples of the types of people that might be included in these department working groups. Such groups should consider including a representative from outside the government to obtain a broader, consumer perspective.

Exhibit 1-2. Examples of Working Group Composition

Solid Waste Working Group
- City Manager
- Social and economic department
- Director of financial department
- Head of the solid waste collection company
- Sanitation team
- Health offices administration
- Municipality-level social team
- Municipality-level economic team
- Environmental protection authority
- Representative from an NGO or citizen group interested in city cleanliness

Education Working Group
- Deputy Mayor
- School principal
- Representative from the parent-teacher association
- Representative from the Education Committee of the City Council

Land Management Working Group
- Head of Technical and Land Administration
- Municipal Services Manager
- Head of Planning and Information
- Construction and Design Team
- Municipality administrators
- Representatives from business firms
- Financial Management team

Construction and Maintenance of Asphalt and Gravel Roads Working Group
- Technical and Land Administration Department
- Technical team
- Administrative support services
- Urban Development and Construction Bureau, local branch

Municipality-Wide Working Group
- Deputy Mayor
- Head of public works department
- Director of finance department
- Other department heads
- Representatives of selected NGOs

What should be the functions of these working groups? The government-wide working group should have such tasks as:
- Developing a government timetable for implementation
- Identifying and defining the types of indicators to be included
- Identifying staff training needs and making arrangements for the initial training efforts, for both management and line personnel
- Developing a communication strategy
- Communicating with local government bodies, the City Council, civil society, and ordinary citizens
- Arranging for the development of guidelines for major data collection procedures
- In general, guiding such steps as those described in Steps 2-8 of this guide
The department working groups should have similar tasks, but focused on their specific needs.
Step 2. IDENTIFY OUTCOMES AND OUTCOME INDICATORS

IDENTIFY THE SERVICE/PROGRAM OBJECTIVES AND CUSTOMERS

For each service or program included in the municipality's performance measurement process, the municipality should start by identifying the service's objectives. What is the service intended to do for the city and its citizens? What are the primary benefits desired?

A good statement of objectives should identify the key intended benefits and the intended beneficiaries (such as all the municipality's citizens or some particular segment). This process should also identify possible unintended effects, both beneficial and negative. Each of these will help formulate the outcomes that will be tracked. Ask such questions as:
• Who benefits from the program, and in what ways? Which demographic groups are particularly affected by the program?
• Who might be hurt by program activities, and in what ways?
• What persons not directly targeted by the program might be significantly affected by it?
• Is the public at large likely to have a major interest in what the program accomplishes?

Exhibit 2-1 provides examples of the objectives and affected citizen groups for a few services.

Exhibit 2-1. Service, Objective, and Customers

Program or Service            | Objective                         | Customers or Users
Solid waste collection        | Clean city and neighborhoods      | City residents
Schools                       | Better education                  | Children and parents; employers
Financial Department          | Increase municipal revenue        | All municipal services and citizens
Road maintenance              | Safe and rideable roads           | City residents and city visitors
All services                  | Improved collection of fees       | All municipal services and citizens
Land management; Housing      | Adequate housing                  | City residents
Water authority               | Healthy population                | City residents
Social services; Health; NGOs | Healthy and secure elderly people | Elderly people, their families, their caretakers

Examples of key customer groups in different programs are shown in Exhibit 2-2.
Exhibit 2-2. Examples of Key Customer Groups in Different Programs

A road construction program | Citizens and transportation companies
A water treatment plant     | Citizens, businesses, and visitors to the community
A vocational school program | Students, parents, and local businesses who recruit the school's graduates
A sports facility           | Athletes and the general public
A municipal park            | Adults, children, and senior citizens in the community, and visitors

SELECT THE OUTCOMES TO TRACK FOR EACH SERVICE

Identifying the specific outcomes that you will try to achieve, given the service's objectives, is one of the most important parts of this process.

What is an outcome?

An outcome is the result of a service from the point of view of the citizens, especially the customers of the service. We can start by thinking about the various steps that go into delivering a service:

First, there are inputs: the resources we use, for example money or employees of the municipality.

Second, there are outputs: the products that the city department, contractor, or agency produces, such as kilometers of road repaired or tons of garbage collected.

Third are the outcomes: the results of the service, such as roads in good condition or clean city streets.

It is useful to identify two primary levels of outcomes: "intermediate" outcomes and "end" outcomes. We can think of the higher outcomes, the "end" (or "ultimate") outcomes, as the real purposes of what we are doing: for instance, the improved health of citizens that comes from a clean city, or the ability to go to work or school quickly and safely that is made possible by good roads. An intermediate outcome is also a result, not just an output, but it is the accomplishment of something that is likely to lead to an end outcome.

Exhibit 2-3 diagrams the "causal" relationship among these categories. Funding and people are needed to implement activities. Those activities are expected to produce outputs, which are expected to lead to intermediate outcomes and then to end outcomes.

Exhibit 2-3. Building Towards Results
[Diagram: Inputs → Outputs → Intermediate Outcomes → End Outcome]

Below are some sources of information that can help you identify what outcomes your municipality should track. Each source is likely to have its own perspective on what is, or should be, important to citizens and the community as a whole. Most, probably all, services and programs will each need to consider multiple outcomes in order to be comprehensive as to what is important to citizens and the community.
• Discussions or meetings with customers and service providers
• Customer complaint information
• Legislation and regulations
• Community policy statements contained in budget documents
• Strategic plans
• Program descriptions and annual reports
• Discussions with upper-level officials and their staff, to identify future directions, new responsibilities, and new standards at the national or regional level
• Discussions with legislators and their staff
• Input from program personnel
• Goal statements by other governments for similar programs
• Poverty Reduction Strategy Document (or other national strategy)
• Sector strategies
• Regional development strategies

You can obtain information on program results through meetings with customers (known as "focus groups"), meetings with program staff, and meetings with other local government personnel.

Exhibit 2-4 provides several different examples of outputs, intermediate outcomes, and end outcomes.

Exhibit 2-4. Examples of Outputs, Intermediate Outcomes, and End Outcomes

Output | Intermediate Outcome | End Outcome
Roads are repaired | Roads are in good condition | Citizens can reach work, school, markets and services
Clinics are built and staffed | Pregnant women visit clinic | Children are born healthy
Garbage is collected | Neighborhoods are clean | Lower incidence of disease
Customers are billed | Fees are paid | Cost recovery enables adequate services to be provided
Water is supplied | Citizens have access to water | Citizens are healthy
Schools have desks and textbooks | Children attend school | Children are educated

How are outputs and outcomes different?

An important element of performance measurement is that it differentiates between outputs and outcomes. In measuring what government does, the traditional focus has been on tracking expenditures, numbers of employees, and sometimes physical outputs. The outcome focus of performance measurement, however, connects performance to the benefits of services for citizens and the community. For example, performance measurement is concerned not with the number of teachers employed, but with the reduction in the student dropout rate.

An output or an outcome? Sometimes people are confused about the difference between an output and an outcome. A key question is how likely the measure is to be important to citizens and service customers. Outputs are usually the physical things that services and their employees did (e.g., paved 200 square meters of road), while outcomes are what those things are expected to accomplish from the viewpoint of the "recipient" of the service (e.g., road condition is good).

Of course, focusing on outcomes does not mean that you neglect outputs. Instead, a focus on outcomes provides a framework for you to analyze outputs in a meaningful way. In the example above, hiring more teachers or increasing the number of lessons taught does not necessarily reduce the number of students dropping out of school. It may mean that you also need special programs to improve the employment opportunities of the parents of students who are dropping out. Or you might set up a preventive counseling program to help those students who are the most likely to drop out. Measuring the performance of programs targeted at decreasing the dropout rate would then tell you how successful or unsuccessful these programs are.

Another example: focusing on the percentage of your municipality's roads that are in good, rideable condition, rather than on the number of square meters of road maintained, helps identify specific areas that most need maintenance attention.

SELECT THE IMPORTANT OUTCOMES FOR YOUR SERVICE/PROGRAM
It is not always obvious what outcomes should be selected, but the best way to decide is to think about what is most important. Whoever is responsible for selecting the outcomes should brainstorm about outcomes before making a final decision. Remember, there can be several "layers" of outcomes, ranging from end outcomes (e.g., the health of citizens) to a number of intermediate outcomes (e.g., access to water, sufficient water pressure, and adequate cost recovery). Usually, you will want to track both intermediate outcomes and end outcomes, to determine both whether the end outcome has been reached and which intermediate outcomes have been successful or might need to be adjusted. End outcomes are more important, but intermediate outcomes provide services and programs with earlier information on progress and, thus, usually need to be tracked.

One important source of outcomes that should not be neglected is consultation with end users, that is, those who will benefit from the service. This might include meetings with citizens in different neighborhoods, or the use of information from a citizen survey.

How do you "brainstorm" for outcomes?

Brainstorming is a technique to help a group think creatively and come up with new ideas. The central "rule" is that everyone should say what he or she thinks openly and without inhibition. No one will be critical. A good way to start might be to have a large piece of paper (maybe on a flip chart) and ask everyone to suggest outcomes they would like to see. Go ahead and shout them out. Write everything down. After the brainstorm, the group will discuss the choices to decide which outcomes they will focus on.

Multiple outcomes are to be expected for any public service. Here are some examples of outcomes that have been selected in cities in Eastern Europe for selected services:

Solid waste collection:
- Areas around collection points are rated "clean"
- Citizens are satisfied with cleanliness in their neighborhoods
- Full cost recovery via collection of garbage fees

Water:
- More households are connected to the city water system
- Citizens feel they have enough water when they need it
- Increased water quality
- Full cost recovery through water tariffs

Outcomes contributing to the Millennium Development Goals

Some local governments may want to consider in what ways their local functions contribute to reaching the Millennium Development Goals. As a starting point, it is useful to note that most local government services are essential contributions to the Millennium Development Goals, even though they are not specifically identified by the Goals, the Targets, or the Indicators. For instance, maintaining the adequacy of roads, a key local function in most countries, is essential to ensuring access to many primary services (clinics, schools, markets, water). Appendix B provides an annotated version of the MDGs, suggesting ways in which local services might be contributors to the Goals.
Local governments may choose to start their performance management efforts in areas related to one or more Millennium Development Goals, choosing outcomes over which the local government has some control and that are important to the community. An example might be:

Goal 2: Achieve universal primary education
Supporting outcomes:
• Roads are in good enough condition to allow access to schools
• School facilities are in good condition (in countries where local governments are responsible for school facilities)

IDENTIFY AND DEFINE THE SPECIFIC PERFORMANCE INDICATORS THAT WILL BE MEASURED: FREQUENCY OF COLLECTION, UNITS, LEVEL OF DISAGGREGATION

For each outcome you identify, you also need one or more specific outcome indicators: specific ways to measure progress toward that outcome. Outcome indicators are at the heart of performance management. They are the elements you will measure and track to see whether your local government is achieving the results it wants. For each outcome that is sought, measurable indicators need to be selected that permit the government to assess the progress being made towards the outcome.

An indicator must first of all be measurable. Not all outcomes of programs are measurable, or at least directly measurable. You need to translate each outcome of the program into performance indicators that specify what you will measure. In some cases you may want several indicators for one outcome.

Typically, indicators start with the words "number of" or "percent of." In some cases, you will want to measure both the number and the percent; for instance, you might want to measure the total number of children who have received vaccinations, and also the percent of children that represents, so that it is clear how many children are still at risk.

Sometimes, when you cannot measure a particular outcome directly, you can use a substitute indicator, a proxy indicator. For example, for outcomes that seek to prevent something from occurring, measuring the number of incidents prevented can be very difficult, if not infeasible. Instead, governments track the number of incidents that do occur, a proxy indicator. These proxies are not ideal, but they can be the only practical approach.
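To make the "number of"/"percent of" distinction concrete, the following is a minimal Python sketch of reporting the vaccination indicator above in both forms. The function name and figures are hypothetical illustrations, not prescriptions from this Guide.

# Report an indicator as a number and as a percent; the percent form
# makes visible how many children are still at risk.
def vaccination_indicators(children_vaccinated, children_total):
    if children_total <= 0:
        raise ValueError("children_total must be positive")
    percent = 100.0 * children_vaccinated / children_total
    return {
        "number vaccinated": children_vaccinated,
        "percent vaccinated": round(percent, 1),
        "number still at risk": children_total - children_vaccinated,
    }

print(vaccination_indicators(children_vaccinated=1840, children_total=2300))
# {'number vaccinated': 1840, 'percent vaccinated': 80.0, 'number still at risk': 460}

Reporting both forms together guards against the weakness of each taken alone: a large number can hide low coverage, and a high percent can hide a small population.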
Sometimes, when you cannot measure a particular outcome directly, you can use a substitute indicator - a proxy indicator. For example, for outcomes that seek to prevent something from occurring, measuring the number of incidents prevented can be very difficult, if not infeasible. Instead, governments track the number of incidents that do occur - a proxy indicator. These proxies are not ideal, but they can be the only practical approach.

Each indicator needs to be fully and clearly defined so that data collection can be done properly and produce valid data. For example, consider the important indicator: "Proportion of population with sustainable access to an improved water source." What do the words "sustainable," "access," and "improved water source" mean? Different people responsible for collecting the data for the indicator can easily define each of these terms differently. Next year, different staff might interpret the terms differently than those collecting the data last year.

An excellent source for definitions, especially for MDG indicators, is that provided by the United Nations Development Group. The block below presents the definition provided by UNDG for the water-access indicator used in the above example. The MDG indicators most likely to be directly relevant to municipalities are listed in Appendix B.

Definition of the Indicator "Proportion of Population With Sustainable Access to an Improved Water Source, Urban and Rural"

The percentage of the population who use any of the following types of water supply for drinking: piped water, public tap, borehole or pump, protected well, protected spring or rainwater. Improved water sources do not include vendor-provided water, bottled water, tanker trucks, or unprotected wells and springs.

Access to safe water refers to the percentage of the population with reasonable access to an adequate supply of safe water in their dwelling or within a convenient distance of their dwelling. The Global Water Supply and Sanitation Assessment 2000 Report defines reasonable access as "the availability of 20 litres per capita per day at a distance no longer than 1,000 metres." However, access and volume of drinking water are difficult to measure, so sources of drinking water that are thought to provide safe water are used as a proxy.

Source for this and the other MDG indicator definitions: Indicators for Monitoring the Millennium Development Goals: Definitions, Rationale, Concepts, and Sources. 2003. New York: United Nations. This publication is available at undp.un.org in six languages.

Such available definitions should provide your municipality with a very good starting point. However, as the definition in the block indicates, at least some tailoring to your own local situation is likely to be necessary to fully define each indicator.

Appendix A provides a candidate set of outcome indicators for a number of typical municipal services. (The MDG indicators included in Appendix B are included in the set of candidate outcome indicators presented in Appendix A.)

You need to consider several factors when selecting performance indicators. Exhibit 2-5 suggests a set of criteria for selecting them. Rate each indicator according to these criteria.

Exhibit 2-5. Criteria for Selecting Performance Indicators

Relevance. Choose indicators that are relevant to the mission/objectives of the service and to what they are supposed to measure.
Importance/Usefulness. Select indicators that provide useful information on the program and that are important in helping you determine progress in achieving the service's objectives.
Availability. Choose indicators for which data can likely be obtained, and within your budget.
Uniqueness. Use indicators that provide information not duplicated by other indicators.
Timeliness. Choose indicators for which you can collect and analyze data in time to make decisions.
Ease of Understanding. Select indicators that citizens and government officials can easily understand.
Costs of Data Collection. Choose indicators for which the costs of data collection are reasonable.

Exhibit 2-6 is an example of indicators that you could use as a starting point for two different programs. This exhibit also contains a fourth major category of performance indicator: efficiency indicators. These are usually defined as the ratio of the cost of a particular service to the amount of product that was produced with that amount of expenditure.

The unit of product traditionally has been one of the outputs. The efficiency indicator usually is of the form "cost per unit of output." However, a sole focus on output efficiency can tempt employees to speed up their work, sacrificing quality. A municipality that also collects outcome data can in many cases then use a much truer indicator of efficiency: "cost per unit of outcome." For example, the public works agency can then, in addition to tracking cost per meter of road repaired, also track "cost per meter of road repaired that was improved from an unsatisfactory condition to a good condition."

Exhibit 2-6. Illustrative Performance Indicators, City of Bangalore, India

Water Supply
Input: Cost; Staff; Materials, equipment
Output: Average number of hours of water supply per day; Ratio of number of stand-posts in slums to total slum households; Daily consumption of water in litres per capita per day (LPCD)
Outcome: Percentage of water lost during distribution to total water supply; Average citizen satisfaction rating with water quality; Percentage of households having a safe or potable water source located within 200 meters of the dwelling
Efficiency: Cost of installing water harvesting equipment (per kilolitre); Cost per metered household

Environment
Input: Cost; Staff; Materials, equipment
Output: Number of persons per hospital bed, including both government and private sector hospitals; Percentage distribution of waste water treated by each method; Percent of waste water treated and recycled for non-consumption purposes
Outcome: Noise pollution in decibels at selected locations; Percentage of population suffering from pollution-resultant respiratory diseases; Percentage of population suffering from pollution-resultant water-borne diseases; Pollution load per capita per day
Efficiency: Average cost, per kilolitre, of waste water treatment; Cost per person treated in hospitals for pollution-resultant diseases
Adapted from "Bangalore City Indicators Programme" (December 2000), Government of Karnataka, Bangalore Metropolitan Region Development Authority.

CATEGORIZE PERFORMANCE INDICATORS

It is good practice for a municipality and its agencies to categorize each of its indicators using categories such as those given above. This will help users of the performance information keep in mind the relative importance to the city and its citizens of the individual indicators.

Input, output, and efficiency indicators are relatively familiar to program managers. Governments regularly use them to track program expenditures and services provided. Indicators of outcomes are much rarer, even though they are more helpful in determining the consequences or results of the program. Categories of performance indicators are described below, and examples are shown in Exhibit 2-7.

It is important for you to recognize the differences between the following categories of information:

Inputs
Input data indicate the amount of resources (expenditures and personnel) used in delivering a service.

Outputs
Output data show the quantity of work activity completed. A program's outputs are expected to lead to desired outcomes, but outputs do not by themselves tell you anything about the outcomes of the work done. To help identify outcomes that you should track, ask yourself what result you expect from a program's outputs.

Outcomes (intermediate and end outcomes)
Outcomes do not indicate the quantity of service provided, but the results and accomplishments of those services. Outcomes provide information on events, occurrences, conditions, or changes in attitudes and behavior (intermediate outcomes) that indicate progress toward achievement of the objectives of the program (end outcomes). Outcomes happen to groups of customers (e.g., students or elderly persons) or to other organizations (e.g., individual schools and/or businesses) who are affected by the program or whose satisfaction the government wishes to attain.

Efficiency and Productivity
These categories relate the amount of input to the amount of output (or outcome). Traditionally, the ratio of the amount of input to the amount of output (or outcome) is labeled "efficiency." The inverse - the ratio of the amount of output (or outcome) to the amount of input - is labeled "productivity." The two ratios convey the same information.

Exhibit 2-7. Examples of Performance Indicators

Input
Number of positions required for a program
Cost
Supplies used
Equipment needed

Output
Number of classes held
Number of projects completed
Number of people served
Number of letters answered
Number of applications processed
Number of inspections made

Outcome
Crime rate
Employment rate
Incidence of disease
Average student test scores
Percent of youth graduating from high school
Number of successful rehabilitations
Number of traffic accidents

Efficiency
Cost per kilometer of road repaired (output based)
Cost per million gallons of drinking water delivered to customers (output based)
Cost per school building improved from "poor" to "good" condition (outcome based)

Exhibit 2-8 contrasts output and outcome indicators for specific services or activities.

Exhibit 2-8. Contrast Between Output and Outcome Indicators

Output Indicators | Outcome Indicators
1. Number of clients served. | 1. Clients whose situation improved.
2. Lane kilometers of road repaired. | 2. Percentage of lane kilometers in good condition.
3. Number of training programs held. | 3. Number of trainees who were helped by the program.
4. Number of crimes investigated. | 4. Conviction rates of serious crimes, and crime rate.
5. Number of calls answered. | 5. Number of calls that led to an adequate response.

As a summary of selecting performance indicators, Exhibit 2-9 provides an example of objectives, outcomes, and indicators for a road maintenance program. The example also provides targets for improving performance. Targets will be addressed later in this manual, in Step 5, Data Analysis.

Exhibit 2-9. Example of an Objective, Outcomes, Indicators, and Targets: Road Maintenance Program

Objective:
Provide safe, rideable roads to the citizens, by regular renovation and maintenance of existing roads and by upgrading of any unpaved roads in the municipality.
Outcomes:
(1) Maintain the municipality's road surface in good, better, or excellent condition.
(2) Reduce traffic injuries and deaths by improving the condition and clarity of road signs.

Indicators for Outcome (1):
Input: cost of paving the road, personnel, equipment; amount of equipment used.
Output: kilometers of road paved; number of households having paved roads.
Outcome: kilometers of road surface in good or excellent condition; percent of citizens satisfied with road conditions.
Efficiency: cost per kilometer of road paved; cost per kilometer of road in excellent condition.

Indicators for Outcome (2):
Input: cost of new road signs, personnel costs.
Output: number of road signs improved; number of new road signs installed.
Outcome: traffic injuries and deaths; road signs in good or excellent condition.
Efficiency: cost per new or improved road sign.

Target for Outcome (1):
Ensure that 90 percent of the road surface is in good or excellent condition.

Target for Outcome (2):
Reduce traffic injuries and deaths during the year by 10 percent through improved road condition and clarity of road signs.

An important element of selecting performance indicators is to define each indicator thoroughly, so that measurements will be made in a consistent way by different personnel and over time. For example, in the above measurement of road condition, the municipal agency needs to define how to determine whether a meter of road is in excellent, good, fair, or poor condition.

SELECT OUTCOME INDICATOR BREAKOUTS (DISAGGREGATION) OF EACH OUTCOME INDICATOR BY KEY CHARACTERISTICS

Your municipality and your agencies will find the outcome information considerably more useful for making improvements if you break out the outcome data by key customer and service characteristics. This will much better enable users of the data to identify more precisely where problems, and successful practices, are present. Consider breaking out the outcome data into categories such as the following:

• By geographical location;
• By organizational unit/project;
• By customer characteristics;
• By degree of difficulty (in carrying out the task in question); and
• By type of process or procedure you use to deliver the service.

Each of these recommendations for indicator breakouts is discussed below.

By geographical location
Break out data by district, neighborhood, etc. The presentation of data by geographical area gives users information about where service outcomes are doing well and where they are not.
Exhibit 2-10 shows the percentage of respondents who rated the cleanliness of their neighborhood in Püspökladány, Hungary, as very clean or somewhat clean. Overall (for the entire city), 45 percent of respondents stated their neighborhood was very clean or somewhat clean. However, when you break out responses geographically (by districts) you begin to see interesting variation. While most of the districts got a similar rating on neighborhood cleanliness, only 26 percent of respondents in district 1 rated their neighborhood as very clean or somewhat clean. This shows that district 1 is a problem area, and the city needs to examine why residents in that district rated cleanliness so low. (Note: The seven districts in the city were categorized based on socioeconomic conditions. Respondents were asked, "How would you rate the cleanliness of the neighborhood you reside in, from 1 to 5, where 1 is very dirty and 5 is very clean?")

By organizational unit/project
Separate outcome information on individual supervisory units is much more useful than information on several projects lumped together. For example, it is useful to have separate performance information on each public works departmental unit, not only on all the units together. Another useful application of breakouts by organizational unit would be to have separate performance information on the different units of the police department. For example, response times could be examined for individual units that specialize in particular crimes or other emergencies.

By customer characteristics
Breakouts by categories of customers (e.g., age, gender, education) can be very useful in highlighting categories of customer services that are or are not achieving desired outcomes. For example, if the government finds that the daytime hours of operation for reporting a problem with city services are too limited, it may consider opening a hotline in the evenings for citizens who are not able to call during the day. For another example, park staff may find that they have put too much effort into satisfying parents with children and that their parks are lacking facilities that the elderly can enjoy.

By degree of difficulty
All programs have tasks that vary in difficulty. A more difficult program will have a harder time achieving the results you desire, and therefore distinguishing the degree of difficulty of a program can substantially change your perception of its outcomes. To show good performance, an organization is sometimes tempted to attract easier-to-help customers while discouraging service to more difficult (and more expensive) customers. Reporting breakouts by difficulty will reduce this temptation. Exhibit 2-11 gives an example of considering the difficulty factor in presenting performance information.

By type of process or procedure you use to deliver the service
Presenting performance information by the type and magnitude of activities or projects being supported by the program is very useful. For example, a street cleaning program can comprise sweepers, garbage cans and dumpsters, and garbage trucks. You should present data on each project in the program by (1) the type and amount of each activity; and (2) the indicators resulting from each project's efforts.
Exhibit 2-10. Geographic Location Breakout
(Chart: percent of respondents rating their neighborhood "very clean" or "somewhat clean" in Püspökladány, citywide and by district.)
Exhibit 2-11. Workload (Client) Difficulty Breakout

                       Unit No. 1   Unit No. 2
Total Clients              500          500
  Number Helped            300          235
  Percent Helped           60%          47%
Difficult Cases            100          300
  Number Helped              0           75
  Percent Helped            0%          25%
Non-Difficult Cases        400          200
  Number Helped            300          160
  Percent Helped           75%          80%

Note: If you only looked at aggregate outcomes for Units 1 and 2 together, you would unfairly evaluate Unit 2, which had a higher proportion of difficult cases.

You can use breakouts for purposes such as the following:
• To help pinpoint where problems exist, as a first step toward identifying corrective action;
• As a starting point for identifying "best practices" that might be disseminated to other program areas, by identifying where especially good outcomes have been occurring; and
• As a way to assess the equity with which services have been serving specific population groups.

A summary checklist of these breakout categories is given in Exhibit 2-12.

Exhibit 2-12. Possible Client and Service Characteristic Breakouts

Client Characteristics
Gender: Examine outcomes for men and women separately.
Age: Examine outcomes for different age ranges. Depending on the program, the age groups might span a large range of ages (such as examining clients under 21, between 21 and 59, and 60 and older), or the program might focus on a much smaller age range (such as youth programs wanting to compare outcomes for youth under 12, 13-14, 15-16, and 17 or older).
Race/Ethnicity: Examine outcomes for clients based on race/ethnicity.
Disability: Examine outcomes based on client disability. For example, some programs might want to determine whether clients with disabilities rate services differently than those without disabilities, as well as the outcomes for clients with various types of disabilities.
Educational level: Examine outcomes for each client based on the educational level achieved before starting service.
Income: Examine outcomes for clients grouped into specific income ranges, based on the latest annual household income at the time clients began service.
Household: Examine outcomes for households of various sizes, generations, and numbers of children.
Difficulty of problem at intake: Examine outcomes by intake status, based on expected difficulty in being able to help the client. Inevitably, some clients are more difficult to help than others. For example, an employment program might want to consider the literacy level of its new clients. An adoption program might want to relate outcomes to the age and health of the children.

Service Characteristics
Facility/Office: Examine outcomes for individual facilities or offices.
Service provider: Examine outcomes for clients of individual service providers, such as caseworkers.
Type of procedure: Examine outcomes for clients who were served using each distinct procedure. For example, a youth program might have used workshops, field trips, classes, etc.
Amount of service: Examine outcomes for clients who received varying amounts of service. This might be expressed as the number of sessions a client attended, the number of hours of service provided to each client, or whatever level of service measurement the program uses.

Source: Analyzing Outcome Information: Getting the Most from Data. The Urban Institute, 2004.
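The difficulty breakout in Exhibit 2-11 is straightforward to mechanize once outcome records are kept client by client. The sketch below is a minimal illustration in Python - the record layout and field names are our own assumptions, not a required format - that reproduces the Exhibit 2-11 percentages by unit and by case difficulty:

    # Client-level records; in practice these would come from agency files.
    # The field names here are invented for illustration.
    clients = (
        [{"unit": 1, "difficult": True,  "helped": False}] * 100 +
        [{"unit": 1, "difficult": False, "helped": True }] * 300 +
        [{"unit": 1, "difficult": False, "helped": False}] * 100 +
        [{"unit": 2, "difficult": True,  "helped": True }] * 75  +
        [{"unit": 2, "difficult": True,  "helped": False}] * 225 +
        [{"unit": 2, "difficult": False, "helped": True }] * 160 +
        [{"unit": 2, "difficult": False, "helped": False}] * 40
    )

    def percent_helped(records):
        """Outcome indicator: percent of clients whose situation improved."""
        return 100.0 * sum(r["helped"] for r in records) / len(records)

    for unit in (1, 2):
        unit_records = [r for r in clients if r["unit"] == unit]
        print(f"Unit {unit}: {percent_helped(unit_records):.0f}% helped overall")
        for difficult in (True, False):
            group = [r for r in unit_records if r["difficult"] == difficult]
            label = "difficult" if difficult else "non-difficult"
            print(f"  {label} cases: {percent_helped(group):.0f}% helped")

Run on these records, the sketch reproduces the exhibit: Unit 1 helps 60 percent overall but 0 percent of difficult cases; Unit 2 helps 47 percent overall but 25 percent of difficult cases.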
Step 3. SELECT DATA COLLECTION SOURCES AND PROCEDURES

IDENTIFY A COLLECTION METHOD

A performance indicator is not very useful until a feasible data collection method has been identified. For MDG indicators, a United Nations publication (2003) provides general suggestions for data collection and sources. (See Appendix B for a list of the MDG indicators likely to be directly applicable to municipalities, and their data sources.) However, your municipality will need to work out the data collection procedure details.

For the MDG indicator used as an example in Step 2, "Proportion of the population with access to an improved water source," the UN report notes that the usual sources have been administrative records on facilities and surveys of households. (It states that the "evidence suggests that data from surveys are more reliable than administrative records and…provide information on facilities actually used by the population.") We note that another possible data collection procedure is the use of trained observer rating procedures to help determine what is available to households.

Data for most of the MDG indicators are obtained from national censuses and surveys (usually conducted every two to five years) or agency records from national line ministries. In some cases data are also computed directly by the country's National Statistical Office, the World Bank, or the UNESCO Institute for Statistics. In addition to these national surveys or agency records, you will need to track disaggregated values for the indicators for your municipality.

There are four primary sources of performance data:
• Agency records
• Surveys of citizens
• Ratings by trained observers
• Use of special measuring equipment

In this guide we discuss the first three in some detail below.

Several factors will affect your decisions about which sources to use for which indicators (a simple planning sketch follows the list):
• How applicable is the source to the information you seek? (For instance, outcome information such as citizen satisfaction or ratings of service quality can only be obtained from surveys of citizens.)
• What is the availability of sources from which you can obtain the information?
• How much time and resources would it take to regularly collect the data?
• What is the likelihood that reasonably accurate data can be obtained from the procedure?
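One practical way to work through these factors is to write the decisions down in a small, machine-readable collection plan. The sketch below (Python) is illustrative only - the indicators, sources, frequencies, and responsible offices named are assumptions for the example, not recommendations - and such a record also supports the documentation tasks discussed under data quality control later in this step:

    # Illustrative data collection plan; every entry is an example, not a standard.
    collection_plan = {
        "Percent of streets rated 'clean'": {
            "source": "trained observer ratings",
            "frequency": "monthly",
            "responsible_office": "public works department",
        },
        "Percent of households with access to an improved water source": {
            "source": "household survey",
            "frequency": "annual",
            "responsible_office": "water utility",
        },
        "Response time to citizen service requests": {
            "source": "agency records",
            "frequency": "continuous",
            "responsible_office": "municipal call center",
        },
    }

    # Print the plan so that each indicator's source and owner are explicit.
    for indicator, plan in collection_plan.items():
        print(f"{indicator}: {plan['source']} ({plan['frequency']}), "
              f"owner: {plan['responsible_office']}")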
USE AGENCY RECORDS

Examples of performance data obtainable from agency records (sometimes called "administrative records") include the following (some of these records will be available locally, others from the national government):

• Incidence of illnesses and deaths in a hospital (end outcome indicator)
• Results of test scores in schools (end outcome indicator)
• Total percent of owed fees collected (intermediate outcome indicator)
• Number of complaints received (intermediate outcome indicator)
• Percent of time equipment is operational, such as street cleaning equipment or public transit vehicles (internal intermediate outcome indicator)
• Response time to citizen requests for a service - such as to determine eligibility for a public welfare benefit, to obtain a business permit, to receive emergency medical attention, etc. (intermediate outcome indicator)
• Cost per kilometer of road maintained (efficiency indicator)
• Size of workload, for example the number of buildings needing inspection or the number of kilometers of street that need to be repaired (used for calculating outcome and efficiency indicator values)

Tracking Citizen Calls and Response Times in Indjija, Serbia
The key feature of Indjija's Sistem48 is a call center for citizens to make complaints, comments, or requests concerning any local government service. After a call is received, several things take place:
• Callers are guaranteed a response within 48 hours.
• The complaint or request is forwarded to the service in question for resolution.
• Data about the call is logged and reported, including:
— Time of call
— Content
— Length of time until resolution
Data on calls are reviewed by the Mayor in bi-weekly meetings with the departments. Receiving and recording citizen calls provides an important measure of citizen satisfaction, as well as pointing to specific areas of particular concern.

Why is it useful to use agency records?
The advantages of using agency records as data sources are their availability, low cost, and program personnel's familiarity with the procedures. Since agency record data are already collected and available, they have been the major source of performance data used by local governments. This information can, thus, form the starting point for your performance measurement system.

For some performance indicators, an agency might need to obtain information from another municipal agency or even from another level of government. For example, one of your public welfare agencies might need record data from the health department to process an application for disability benefits.

Can you use existing processes?
For performance measurement purposes, however, your agencies are likely to need to modify their existing processes. For example, you may have to modify forms and procedures to enable you to calculate service response times. This involves:
• Recording the time of receipt of a request for service;
• Defining when "completion" of the response has occurred;
• Recording the time of completion of the response;
• Establishing data processing procedures to calculate and record the time between these two events; and
• Establishing data processing procedures for aggregating the data on individual requests.

Are there drawbacks to using agency records?
Agency records have a major limitation: records alone seldom provide sufficient information on major aspects of program quality and outcomes.

SURVEY CITIZENS (INCLUDING BUSINESSES)

Citizen surveys are a very important procedure for obtaining many key elements of the outcomes and quality of many, probably most, of your municipality's services. They may be the only way to obtain certain information for some, if not many, of your outcome indicators.

Why is it useful to use customer surveys?
The advantages of customer surveys are that they provide information not available from other sources and they obtain information directly from program customers. The disadvantages are that they are unfamiliar to agency personnel and require special expertise or training; they can be costly; and they are based on respondents' perceptions and memory and are therefore subjective.

Examples of the types of information you can obtain from customer surveys include:
• Ratings of overall satisfaction with a service and of the results achieved
• Ratings of specific service quality characteristics
• Data on actual customer experiences and the results of those experiences
• Data on customer actions/behaviors that the program's service seeks to encourage or to reduce
• Extent of service use
• Extent of awareness of services
• Reasons for dissatisfaction or non-use of services
• Suggestions for improving the service
• Demographic information about customers

Exhibit 3-1 illustrates the service outcome data that a survey of citizens can provide.
Exhibit 3-1. Example of Water Service Indicators Derived from a Customer Survey

Indicator | %
Percent who receive drinking water most often from a private connection to the city pipeline | 49.3
Of those who sometimes need water from a different source, percent who get it from a river, lake, pond, stream, or other surface water | 24.2
Percent who have access to water no more than once every three days on average during the past 12 months | 13.1
Percent who have access to water four hours or less per day on average during the past 12 months | 25.4
Percent who report that they always or usually have sufficient water when they needed it | 87.1
Percent reporting that sometime in the past 12 months the water had bad odor | 8.6
Percent reporting that sometime in the past 12 months the water had bad taste | 16.8
Percent reporting that sometime in the past 12 months the water had a different appearance | 34.4
Percent reporting that the pressure or flow of water was not enough in the past 12 months | 19.7
Percent who report they pay for water service | 97.4
Of those who don't pay at present, percent who say they would be willing to pay a fee to receive a better water supply | 33.3
Of those who pay at present, percent who say they would be willing to pay a higher fee if they were to receive better service | 56.7

Source: Data from Kombolcha, Ethiopia City Survey, 2005. In "Using Performance Management to Strengthen Local Services: A Manual for Local Governments in Ethiopia." Katharine Mark. July 2006. Washington, D.C.
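Indicators like those in Exhibit 3-1 are simple tabulations of individual survey responses. The sketch below (Python, with ten invented responses purely for illustration - a real survey would of course use the full response file) shows how one such percentage might be computed:

    # Illustrative only: invented answers to the question "Did you always,
    # usually, sometimes, or never have sufficient water when you needed it?"
    responses = ["always", "usually", "never", "always", "usually",
                 "sometimes", "always", "usually", "always", "sometimes"]

    # The indicator counts "always" and "usually" answers as sufficient.
    sufficient = sum(r in ("always", "usually") for r in responses)
    percent_sufficient = 100.0 * sufficient / len(responses)
    print(f"Percent reporting sufficient water: {percent_sufficient:.1f}%")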
Unlike opinion polls, surveys of citizens focus on respondents' past actual experience with services, not their opinions about the future. (However, you can include in any of these surveys a few questions to solicit citizen opinions on issues of particular importance. This will make these surveys of even greater use to the municipality.)

Surveys are especially useful if they are taken periodically, so that the local government can see what has improved (or weakened) over time.

These surveys come in two major forms:
• Surveys of samples of households (or businesses) in the municipality - commonly called a "household" survey. Such a survey can be used to provide feedback simultaneously on multiple services.
• Surveys of those citizens (or businesses) that have actually used the particular service - a "user" survey.

User surveys are likely to be most useful to your individual agencies and programs, since they can obtain more detailed information on the particular service. In household surveys covering multiple services, normally only a few questions can be asked about each service.

Survey results can also be very effective in informing the public about city performance. They can, for example, be used in "citizen report cards" - as illustrated in the box - to publicize city performance in clear, understandable terms in order to galvanize the government to improve services.

Using Citizen Report Cards in India
In 1993-94 the Public Affairs Center in Bangalore, concerned about the deteriorating quality of public services, developed and implemented a citizen satisfaction survey that measured user perceptions of the quality, efficiency, and adequacy of basic services extended by 12 municipal agencies. The results of the survey were translated into a quantitative measure of citizen satisfaction and presented in various media in the form of a report card. The 1994 survey was followed up in 1999.
• Eight of the 12 agencies covered in the 1994 report card made attempts to respond to public dissatisfaction. The worst rated agency - the Bangalore Development Authority - reviewed its internal systems for service delivery, trained junior staff, and began to co-host a forum for NGOs and public agencies to consult on solving high-priority civic problems such as waste management.
• The report cards were also successful in generating political momentum for reforms. Popular local and regional media carried regular stories on the report card findings. Citizens have also been inspired to take initiative toward improving services and have subsequently engaged in the state-citizen Swabhimana partnership in Bangalore - a program to arrive at sustainable solutions to longstanding city-level problems. The Chief Minister of Karnataka has also established a Bangalore Agenda Task Force of prominent citizens to make recommendations for the improvement of basic services.
• Report cards have also been used as an effective tool in Ukraine and the Philippines.

CARRY OUT A CUSTOMER SURVEY

You may need to contract for these surveys, especially household surveys. In this case, the contractor pre-tests the final questionnaire, conducts the survey, tabulates the results, and reports them to the municipality. However, it is the responsibility of the municipality and its agencies to make sure that the
questionnaire and survey process obtains the information it needs and that the survey is done in a valid manner.

Task 1: Prepare a draft questionnaire
Decide what service outcomes are best obtained from citizens. Decide what descriptive information from respondents would help you interpret the outcome data. For example, would information from the respondent on reasons for not liking the service, or on how much service the respondent received from the program, be helpful in later interpreting the outcome information? Obtain professional help to prepare the question wording so that the wording is clear and unbiased. A good way to jump-start the process of drafting a questionnaire is to use an existing questionnaire from another government as a starting point. Examples will often be available from other governments.

Task 2: Identify the major population subgroups for which the local government wants to obtain data
This decision should be guided in part by the resources available to conduct the survey. Try to obtain responses from at least 100 respondents in any category. For example, if the survey seeks to compare four different geographic areas in the municipality, the number of completed interviews needs to be at least 400. If more precise data are needed, this means a larger sample size. (Determination of what size samples are needed to obtain various degrees of precision in the survey findings will need to be done by a statistician; a simple margin-of-error sketch appears later in this section.) If, however, the requisite sample size is not affordable, it is better to go ahead with what is affordable, so that at least approximate results can be obtained. It is better to be roughly right than precisely ignorant!

Task 3: Determine the mode of administration and who will be responsible for each survey task
Several factors need to be considered, especially cost and the likely rate of response. In many developing countries these surveys will best be administered using in-person interviews. Interviewing time is inexpensive, and alternative approaches such as telephone interviews or mailed questionnaires may not be feasible. In more developed countries, telephone interviews and mailed questionnaires will likely be less expensive than in-person interviews. Careful record keeping is important while the survey is administered, so that the findings can be compared to future administrations of the same survey.

Customer Survey in Georgia
A customer survey was conducted in seven cities in Georgia in 2001, followed by two additional surveys in 2002 and 2004. The survey included questions like:
About how many hours during each day do you have access to the water supply during the winter?
Would you say the city is generally very clean, fairly clean, average in cleanliness…?
Speaking of everyday household garbage (as opposed to bulk refuse), in the past month (meaning, in the past thirty days) did the garbage collectors ever miss picking up your garbage?
How satisfied are you with the services provided to you by the local government?
Survey results were used by cities to identify priority areas for service improvement. (A sample multi-service survey instrument is attached in Appendix A.)
Source: USAID Local Government Reform Initiative (LGRI) in Georgia.
The frequency at which the survey is administered should be determined in advance.

Task 4: If the survey is to be contracted out, hold a competition, select the contractor, and undertake the survey
There are several reasons to have the survey carried out by a contractor, such as greater objectivity or the lack of in-house resources. If a contractor is needed, you will need to prepare a scope of work
and then select the contractor. A competitive process will help you obtain the best possible contractor for the job. If the survey is to be done in-house, carry out the implementation steps established in Task 3 above. For "user surveys," the government may be able to administer the surveys itself, especially if potential survey respondents can be expected to come to the service facility, at which time the respondent can complete the questionnaire. This last procedure, however, would need to be done in a way that protects the confidentiality of each respondent, so the respondent can answer the questions frankly.

Task 5: Provide for analysis and reporting of the data
After the survey has been completed, the responses need to be tabulated and the findings summarized for the relevant stakeholders. (This is discussed further in Steps 4 and 5.) Exhibit 3-2 is an example from a 2004 survey in Georgia. It depicts citizens' ratings of city cleanliness in each of seven cities, comparing results in 2002 and 2004.

Exhibit 3-2. Percent of Citizens Rating the Entire City as "Fairly Dirty" or "Dirty"
(Bar chart comparing 2002 and 2004 percentages for each of the seven cities; values range from 2 percent to 50 percent.)

Source: "Georgia Customer Survey 2004." By Ritu Nayyar-Stone and Lori Bishop. June 2004. Washington, D.C.: The Urban Institute, Project No. 06901-012.

Appendix C of this Guide provides an example of a multi-service customer questionnaire to assess citizens' satisfaction with a number of services that might be provided by a local government. The questionnaire can serve as a starting point for any municipality. However, it should, of course, be modified based on the specific conditions and services in your city. Substantial modifications will be required to adapt this questionnaire for a small town or agglomeration of villages.
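As promised under Task 2, the sketch below gives a feel for the sample-size arithmetic. It uses a standard approximation that assumes simple random sampling (a simplification; a statistician should still review any real survey design) to estimate the 95 percent margin of error, in percentage points, for a reported percentage:

    import math

    def margin_of_error(percent, sample_size, z=1.96):
        """Approximate 95% margin of error, in percentage points, for a
        reported percentage from a simple random sample."""
        p = percent / 100.0
        return 100.0 * z * math.sqrt(p * (1 - p) / sample_size)

    # Worst case (a reported value of 50%) for the sample sizes in Task 2:
    for n in (100, 400):
        print(f"n = {n}: +/- {margin_of_error(50, n):.1f} percentage points")

With 100 respondents in a category, the worst-case margin of error is roughly plus or minus 10 percentage points - one reason Task 2 treats 100 responses per category as a practical minimum.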
Appendix D is an example of a user questionnaire covering water service delivery and patients' satisfaction with the quality of services in a hospital. It asks service users to provide comments and suggestions for improvement, and also asks about end outcomes.

Other options a government has include sampling a smaller number of respondents, asking fewer questions, or using mail instead of personal interviews. Note that this example includes a few open-ended questions. Open-ended questions can obtain richer information about the opinions of citizens, but the responses take considerably more time to process and analyze, and it is difficult to know how representative the responses are unless a high proportion of the respondents give answers that are quite similar.

Relative advantages of user and household surveys
User surveys can be especially useful to municipal agencies because they can provide more in-depth information on a particular service from citizens who have used the service. User surveys are also likely to be easier to administer than household surveys, because contact information is likely to be already available for users, and users are more likely to respond because they are interested in the service.
Household surveys have their own advantages. They can obtain information about several services at once, survey costs can be shared among the agencies involved, and they can obtain information from non-users. Information obtained from non-users is helpful in estimating rates of participation among different types of households. Also, non-users can indicate why they do not use a service, and improvements can then be made as appropriate.

USE SYSTEMATICALLY COLLECTED TRAINED OBSERVER RATINGS

This data collection procedure can be highly useful for assessing any service outcome that can be measured by direct physical observation, especially by visual means. Different observers rate, in a systematic manner, physically observable conditions using a pre-selected rating scale. The rating scales need to be clear and specific, so that different raters would give approximately the same ratings to the observed condition. This also means that changes in condition over time can be reliably detected by comparing later findings made by either the same or different trained raters.

This can be a highly accurate, reliable, and useful procedure if you have a clearly defined rating system, adequate training of the observers, adequate supervision of the rating process, and a procedure for periodically checking the quality of the ratings. Examples of applications include:
— Cleanliness of streets, alleys, and recreation areas;
— Condition of trash receptacles in public areas;
— Presence of offensive odors from solid waste;
— Condition of roads (potholes, sidewalks, paved area, etc.);
— Condition and visibility of street signs;
— Condition of public facilities, such as school buildings and health clinics;
— Condition of safety equipment in buildings (fire extinguishers, hoses, sprinklers);
— Cleanliness of public baths;
— Conditions in waiting rooms;
— Waiting times; and
— Ability of citizens with developmental disabilities to undertake basic activities of daily living.
What types of rating systems can you use?
Trained observers can use three major types of rating systems:
• Written descriptions
• Photographs
• Other visual scales, such as drawings or videos

Written Descriptions
This is the simplest and most familiar type of rating system. It depends on specific written descriptions of each grade used in the rating scale. Exhibit 3-3 is a written set of grades for street cleanliness. Visual ratings of cleanliness can be made from a car, or by observers on foot.

Photographic Rating Scales
Photographic scales can be more precise than written scales in providing clear definitions of each rating grade, and they make ratings easier to explain. Photos are used to represent each of the grades on the rating scale. Observers are trained to use that set of photos to determine the appropriate grade. An example of a photographic rating scale, based on photographs taken in a city in Armenia, is shown in Exhibit 3-5.

Other Visual Scales
Visual rating scales can also use drawings or sketches that represent each grade on a rating scale. An example is sketches representing conditions of school buildings, or classroom walls. This kind of rating scale was used by the New York City school system to track the physical condition of its schools and to help make decisions about building repairs.

Assessing the need for repairs (determining needed action) is an additional, very important use for outcome information from observer ratings. The city of Toronto used the information obtained from the scale in Exhibit 3-4 below not only to help track road conditions but also to determine what repairs were needed in each location.

Exhibit 3-3. Rating Cleanliness of Streets, Squares, and Sidewalks

Rating 1 - Streets Almost or Completely Clean: Up to two pieces of litter are allowed.
Rating 2 - Streets Generally Clean: Some litter observed in the form of items thrown here and there; or a separate pile, not thrown in the container, with a volume equal to or smaller than a shopping bag.
Rating 3 - Dirty Street: Garbage scattered here and there along the street, or a big pile, but not sufficient to be considered a garbage collection area; or, in a generally clean block, a single pile bigger than a shopping bag but smaller than a standard 120-litre container that was not put out for pickup by the cleaning team.
Rating 4 - Very Dirty Streets: Piles of garbage or lots of litter scattered everywhere or almost throughout the block; or, in a block, a pile with a volume (on the order of 1 m3) much bigger than a 120-litre standard garbage container.
Source: Adapted from Kavaja Municipality, Albania. City Cleanliness Rating by Trained Observer Ratings Approach. November 2006.

Exhibit 3-4. Road Condition Rating Scale

Rating | Condition | Description | Comments
9 | Excellent | No fault whatsoever |
8 | Good | No damage, normal wear, and small cracks | Recently constructed work
7 | Fair | Slight damage; crack fill or minor leveling required | Average rating for City of Toronto pavements and sidewalks
6 | Repair | 10% of complete replacement cost | Pavement requires preventive overlay
5 | Repair | 25% of complete replacement cost | Eligible for reconstruction programme
4 | Repair | 50% of complete replacement cost |
3 | Repair | 75% of complete replacement cost | Total reconstruction probably indicated
2 | Repair | More than 75% of complete replacement cost | Requires complete reconstruction
1 | Impossible to repair | |

Source: Adapted from Performance Measurement: Getting Results, 2nd edition, Washington, DC: The Urban Institute, p. 104.

How can you establish a trained observer rating system?
The basic steps to establish a trained observer rating system are as follows:
Task 1: Determine what conditions to rate and where the ratings will be made
Your municipality first needs to choose what conditions need to be rated. You will also need to decide which areas of the city to cover - whether to try to cover all parts of the whole city, all streets, all facilities, or to concentrate on specific neighborhoods, a sample of blocks, or only some facilities. Starting with just one area can develop skills, and may also be motivating if it shows positive results. A final decision you will have to make is how often to carry out the ratings. This depends on a number of factors, particularly what is being rated (such as how frequently observable conditions are likely to change) and the cost of the ratings. For example, building ratings might be done only once a year, while ratings of litter conditions might be done considerably more frequently, such as monthly or every two weeks.

Task 2: Develop a rating scale with explicit definitions for the grades of each condition to be measured
After determining the scope of the effort as described above, you will need to decide specifically what to rate and to create measurable rating scales for each condition - such as the rating scales in Exhibits 3-3 and 3-4. If you use a photographic rating scale, you will need to take additional steps:
• Take a large number of photographs in settings representative of the full range of conditions expected to be present. (One rural commune started with a group of about fifty photos.) Care should be given to have a set of photographs that encompasses the full range and multiple variations of each condition. (In Exhibit 3-5 below you will see some photographs taken of "street cleanliness" conditions in Yerevan, Armenia, that might be used to establish a photographic rating system.)
• Select a panel of persons with varied backgrounds to act as judges - persons who will not be part of the performance measurement activities. Select labels, each representing a condition that the program expects to find (for example, smooth, somewhat smooth, bumpy, very bumpy), and ask the judges to sort the photographs into groups that represent each condition.
• For each condition, select four or five photographs that the largest number of judges identified as representative. These sets of photographs then become the rating scale.
• Develop written guidelines to accompany the photographs.
• Package the guidelines and copies of the photographs selected for the final scale in a kit for each observer.

Exhibit 3-5 shows examples of street cleanliness conditions. Visual ratings by trained observers can be based on a scale described both photographically and in writing. This reduces the subjectivity of the ratings, so that different observers using the rating guidelines would give the same rating to similar street conditions. The exhibit shows photographs representing four levels of rating:
Condition 1: Very clean. No noticeable littering, or one or two scattered items.
Condition 2: Clean. A few littered items in a relatively contained area. No large items.
Condition 3: Dirty. Many littered items covering a fairly large area.
Condition 4: Very dirty. A large number of littered items; one or more large piles of trash.
Exhibit 3-5. Sample Rating Scale for Street Cleanliness
(Four photographs illustrating Condition 1 through Condition 4.)

Task 3: Develop and document procedures for selecting inspection locations, recording data, and processing data
You will need to be sure that every aspect of the process is decided ahead of time, by thinking through each step. You will need to decide:
— How will the observations be recorded? On a paper form? In a handheld computer? With a camera?
— How will blocks/facilities be assigned to observers? How long will observers work each day, and how many blocks/facilities will they be expected to rate each day? How will they be transported to the blocks/facilities they are rating? Will they be paid, and if so, how much?
— What provision should be made for raters to supplement their numerical ratings with comments that provide specific information on the nature and extent of the problems they have observed?
— What do observers do with their data after rating? Record it in a central database? Will someone else process it, and who will that be?
It is also important to think through the data analysis ahead of time, to be sure data are collected in a form that will be compatible with the analysis methods and formats for reporting. See Step 5 for more information on analysis. Procedures for rating certain quality elements of municipal services (street cleanliness) using trained observer ratings are provided in Appendix D.

Task 4: Select and train observers
Observers might be staff members, part-time employees, students, community members, or other volunteers. (See the box for some of the advantages of using volunteers.) Training sessions need to be designed carefully so that observers will receive complete and consistent training, but the training itself need not be very long. Depending on the complexity of the conditions to be rated and the experience of the raters and trainers, even one or two days might be enough.

Using Volunteers as Trained Observers
There are many examples of successful rating systems that rely on volunteer raters. Some examples include neighborhood residents, pensioners, representatives of NGOs, and youth groups or high school/university students. In addition to cost factors, the use of volunteers has several advantages. Citizens bring a fresh eye to their assessments. They will notice aspects that might be missed by others, and are likely to think more broadly, coming up with ideas about the causes of the problem and possible solutions. In addition, with citizens as raters, the process is likely to be more trusted by the public as a genuine and objective assessment of public services. If the citizen volunteers add comments in their ratings about citizen behavior (for example, that littering should be reduced), those comments are more likely to be accepted.

The observers first need to learn what each of the grading scales means and how the local government expects them to be interpreted. Then the trainees should work together as a team to test the process of rating blocks/facilities. The field training locations should be selected to ensure that they include a range of conditions similar to those the raters are expected to encounter. This group practice is important to develop in each observer the same understanding of the rating scales. If the program is just starting, this might be a good time to review the rating scale and revise it as necessary. For instance, if there are consistent divergences in the ratings given, it might be an indication that the scale needs refinement or more elaboration.

Task 5: Set up a procedure to check the quality of the ratings
It is inevitable that some variation in rating quality will occur over time. Moreover, even if the raters are all consistent, there will always be those who are doubtful about whether such ratings are reliable. For both those reasons it is important to have a systematic procedure to check the ratings of the trained observers. One relatively easy way to do this is to have an experienced rater, usually the rating supervisor, verify a random selection of about 10-15 percent of the areas/facilities rated. Knowing that this will happen will also help raters work to maintain the consistency of their ratings. (A small sketch of such a verification check follows the list of disadvantages below.)

What are the advantages of trained observer ratings?
Some advantages of trained observer ratings are:
• They provide reliable, reasonably accurate ratings of conditions that are otherwise difficult to measure;
• If ratings are done several times a year, you can adjust the allocation of program resources throughout the year;
• You can present ratings in an easy-to-understand format to public officials and citizens; and
• They are relatively inexpensive.

A major advantage of trained observer ratings is that they can be used to give real-time operational feedback to service managers as to where service problems are present - in which streets, in which facilities. Your municipality can use those ratings, and any supplementary comments provided by the raters, as a basis for "work orders" to service personnel. This contrasts with citizen surveys and even agency record data, which do not lend themselves as well to identifying specific service improvement needs.

What about any disadvantages?
Some disadvantages of trained observer ratings are:
• They are a "labor intensive" method that requires time and training of observers;
• You need to check ratings periodically to ensure that the observers are adhering to procedures;
• Program personnel may not feel comfortable with the procedures for trained observer ratings because they do not use them often; and
• They are only applicable to conditions that can be measured by physical observation.
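The verification check described in Task 5 can be organized very simply. The sketch below is one possible arrangement, in Python, with an invented data layout (locations graded on the 1-4 scale of Exhibit 3-5); it draws a roughly 12 percent random sample for re-rating and computes how often the supervisor's grades match the observer's:

    import random

    # Illustrative data layout: ratings recorded as {location: grade}, using
    # the 1-4 street cleanliness scale. Real ratings would come from the field.
    observer_ratings = {f"block-{i}": random.randint(1, 4) for i in range(200)}

    # Draw roughly 12 percent of rated locations for re-rating by the supervisor.
    sample = random.sample(sorted(observer_ratings),
                           k=int(0.12 * len(observer_ratings)))

    # The supervisor's grades would come from an independent field visit; here
    # they are simulated by copying the originals and nudging a few upward.
    supervisor_ratings = {loc: observer_ratings[loc] for loc in sample}
    for loc in sample[:3]:
        supervisor_ratings[loc] = min(4, supervisor_ratings[loc] + 1)

    # Agreement rate: percent of re-rated locations with exactly matching grades.
    matches = sum(observer_ratings[loc] == supervisor_ratings[loc]
                  for loc in supervisor_ratings)
    agreement = 100.0 * matches / len(supervisor_ratings)
    print(f"Agreement on the verification sample: {agreement:.0f}%")

A consistently low agreement rate would suggest, as Task 4 notes, that the scale needs refinement or the observers need retraining.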
Youth in Georgia as Trained Observer Raters

In Georgia, local governments worked with youth - high school students - to develop, conduct, and analyze trained observer ratings. Announcements were posted in schools asking youth to participate in the training. Interested students were selected based on previous volunteer experience and basic familiarity with local government, and were trained by a US expert through hands-on training on rating, including fieldwork exercises. Youth trained in the initial round then trained youth in other cities.

In Ozurgeti, the Youth Group rated the cleanliness of the streets and the areas around garbage bins several times and presented the results to the mayor and city council, followed by a joint presentation to the public by the Youth Group and city officials. In 2003, the audience for the presentation included local government representatives, citizens, communal service departments, and other interested groups. The presentations were covered by the media.

In Gori, the volunteers not only rated the streets but also collected comments from citizens during the rating process and incorporated these into their recommendations for the Communal Service Department.

In several cities, youth groups organized volunteer cleaning days to clean up the center of town or specific parks. In Zestaponi, for example, the Youth Group mobilized more than 1,000 participants. Eleven Youth Groups participated in the Global Youth Volunteer Day Cleaning Action, which was sponsored by 15 organizations, including UI, IRI, and IFES. More than 8,000 people participated in the 11 cities, with (in most cases) the active participation of local governments, businesses, and local NGOs.

These experiences showed that youth groups represent an exciting mechanism for increasing the transparency and sustainability of reforms at the local level. The young people who participated in this initiative were enthusiastic about involvement in activities to improve their communities, proved to have a great deal of influence on older members of their communities, and were able to successfully engage political leaders in discussions about local decision-making.

Source: Georgia Local Government Reform Initiative, Final Report, January 2005.
DATA QUALITY CONTROL

When performance data begin to be used to help make important decisions, such as budget decisions, users of the performance data become very concerned about the quality of the data and their credibility. From the beginning, municipalities should consider ways to ensure reasonable quality of the performance measurement process. This means building accuracy into the design of the measurement system and into the training of personnel.

As agencies implement performance measurement systems, they need to ensure that the data are sufficiently complete, accurate, and consistent to support decision making. The data must be sufficiently free of bias and other significant errors that would affect conclusions about the extent to which the outcomes sought have been achieved.

A particular source of potential bias that needs to be guarded against is the incentive for employees to game the system by manipulating indicator values to make their performance look good.

Following are some sub-steps your municipality, and each of your agencies, can take:

Task 1: Assign responsibility for data quality
Make it clear that each agency and each of the agency's programs is responsible for the quality of the data it submits. If possible, also provide some form of periodic review by an independent office (perhaps a management or audit office) of at least samples of the data collection procedures and data. Program managers are the first line of defense against poor quality data. They should be accountable for data quality. However, the performance measurement system should also be subject to periodic assessment by other, more independent offices or organizations.

Task 2: Require that each performance indicator is fully and clearly defined, so that users will know what is being reported
Both the program collecting the data and those using the data need to be clear as to what is being measured and the time period covered by the data. A classic example is measuring response times. Indicators of response time are likely to be included in the performance indicators for many, if not most, programs. But response times can usually be defined in many ways. When should the clock be started - when the request is first made to the agency, when the appropriate person in the agency has received the request, or at some other point? When should the clock be stopped - when a formal written response has been mailed to the customer, when some appropriate action has been started, or at some other time? (A sketch at the end of this step shows one way to make such definitions explicit in the data processing itself.)

Task 3: Require written procedures describing how the data are to be collected
Documenting data collection procedures can be a cumbersome task. However, it clearly is good practice to write down the procedures. This will help ensure that data collected by different (perhaps new) staff will be collected using the same procedures that other staff have used.

Task 4: Train persons who collect or record the data to do it the same way - the way specified in the documentation for the data collection procedure
A classic example occurs in police reporting of what category of crime was committed. Police officers have discretion in labeling certain crimes, depending on estimates of the amount of property stolen or other factors.

Task 5: Make sure the material accompanying the performance data provides sufficient information for users to understand the data
Task 6: Identify the source of the data in the performance reports
Sources can be presented in notes at the bottom of each table or, if extensive, in an appendix at the back of the report. This is good practice.

Task 7: Identify limitations of the data
The agency may still want to report the data but should make their limitations clear. For example, when performance information is based on surveys whose findings rest on only a small number of responses, users of the data should be alerted to this limitation. When reporting the results of surveys of citizens, confidence intervals and response rates (how many of the persons the agency attempted to reach actually responded) should normally be identified. (A small illustrative sketch appears after Task 11 below.)

Task 8: Make certain that the time period covered by the data for each indicator is clearly identified
Individual performance indicators may cover differing time periods. For example, agencies that survey their customers may do so at various times of the year. The period when a survey was administered should be identified. Are the data current or old? Users should be alerted to this. Substantial time lags can occur before the data for an indicator become available, and agencies should attempt to speed the process so that timely data can be provided. (Internet web sites often provide data that are older than desirable, because of the time it takes to post them or because the site is not kept up.) One way to reduce this problem is to encourage agencies to obtain and report preliminary data, identifying the data as preliminary and indicating when the final version will be forthcoming.

Task 9: Avoid changing the performance indicators from one year to the next
Some change is to be expected, and is justified, when it reflects improvements in measurement. Changes include the addition of brand new indicators, deletions, and indicators whose data collection procedure has changed so much that current measurements can no longer be compared to previous ones. However, too much change means that comparisons over time cannot be made, and users will become suspicious that the changes are intended to ensure that primarily indicators with favorable outcomes are measured in a given year.

Task 10: Protect your files, whether electronic or not, from tampering
Accidental or intentional breaches of confidentiality or security need to be guarded against, especially with data that are likely to have major implications for the agency, program, or staff.

Task 11: Ensure that the staff responsible for collecting and processing the data have been adequately trained
Reviewers of a performance measurement system should assess the extent to which persons collecting, entering, or otherwise involved in processing the data are doing so correctly. To the extent that the program has documented its data collection procedures, this will be easier to assess by comparing the documented procedures to those actually being used. A typical concern is that staff regularly turn over; new staff need training in proper data collection and recording. For trained observer ratings (discussed above), the agency should periodically re-check samples of ratings to assess whether the observers have over time "telescoped" their ratings or otherwise begun deviating from the rating standards. For surveys of citizens (discussed above), if done in-house, the performance measurement system reviewers should check the procedures being used, including the work of the persons responsible for sample selection, for administering the survey, and for processing completed questionnaires. If the survey is contracted out, at least check the reputation of the survey firm before finalizing the contract.
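To make the confidence intervals and response rates mentioned under Task 7 concrete, here is a minimal Python sketch using the standard normal approximation; the survey numbers are hypothetical.

    import math

    def proportion_ci(successes: int, n: int, z: float = 1.96):
        """95% confidence interval for a proportion (normal approximation)."""
        p = successes / n
        half_width = z * math.sqrt(p * (1 - p) / n)
        return (max(0.0, p - half_width), min(1.0, p + half_width))

    # Hypothetical survey: 120 of 200 respondents rated street cleaning "good"
    low, high = proportion_ci(120, 200)
    print(f"60% satisfied, 95% CI: {low:.1%} to {high:.1%}")  # about 53.2% to 66.8%

    # Response rate: completed interviews / persons the agency attempted to reach
    print(f"Response rate: {200 / 350:.0%}")  # if 350 were attempted, about 57%

Reporting the interval and the response rate alongside the headline percentage is what lets users judge how much weight the finding can bear.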
Task 12: Require that the agency providing performance data identify the particular office and primary person responsible for the data
Some agencies have used an "outcome indicator specification" form that identifies who is responsible for the indicator. The form could identify the responsible person by name or only the responsible office. See Exhibit 3-6 for a sample of such a form. A form such as this provides a good summary description of an agency's individual performance indicators.

Exhibit 3-6. Outcome Indicator Specification Sheet

Date: _______________   Program: ______________________
1. Outcome
2. Outcome indicator
3. Category of indicator (e.g., intermediate or end outcome)
4. Data source and collection procedures
5. Breakouts of the outcome indicator that are needed
6. Frequency of collection
7. Who is responsible for data collection and its quality

Source: Performance Measurement: Getting Results. Washington, DC: The Urban Institute Press, 2006.

Task 13: Establish a formal municipality-wide policy clearly identifying the importance of data quality and identifying the respective responsibilities of managers and staff
This includes responsibility for:
• Training personnel in data collection;
• Implementing procedures for double-checking data entries;
• Examining the reasonableness of the data (are they in an appropriate range?), whether the examination is done manually or automatically through computer programming;
• Checking on data outliers;
• Sampling a subset of the data to verify its accuracy;
• Establishing a process for periodically checking data quality.

Agencies should establish some procedure for periodically checking completeness and accuracy. For trained observer ratings, a supervisor should check a sample of ratings to assess whether they are reasonably complete and accurate. For citizen surveys, a supervisor in the organization administering the survey should check that interviewers, data entry persons, and computer programmers have been thorough and accurate in their work. For agency record data, entries made by human beings should be periodically sampled to assess their completeness and accuracy. Computer checks can often be included that look for certain types of mistakes, such as out-of-range numbers, unusual patterns in the recorded data (as has been done with school test score data to help detect cheating), and missing data.
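Such computer checks can be quite simple. The following minimal Python sketch, with hypothetical field names and an assumed plausible range, illustrates range, missing-value, and outlier checks of the kind listed above:

    import statistics

    # Hypothetical records; the field names and acceptable range are assumptions.
    records = [
        {"district": "A", "response_days": 4.0},
        {"district": "B", "response_days": None},   # missing entry
        {"district": "C", "response_days": 412.0},  # mis-keyed value
        {"district": "D", "response_days": 5.5},
        {"district": "E", "response_days": 6.5},
        {"district": "F", "response_days": 3.8},
        {"district": "G", "response_days": 7.2},
    ]
    VALID_RANGE = (0.0, 60.0)  # plausible response times, in days

    for i, rec in enumerate(records):
        value = rec["response_days"]
        if value is None:
            print(f"record {i}: missing value")
        elif not VALID_RANGE[0] <= value <= VALID_RANGE[1]:
            print(f"record {i}: {value} outside range {VALID_RANGE}")

    # Simple outlier screen: flag values far from the mean of the valid data.
    valid = [r["response_days"] for r in records if r["response_days"] is not None]
    mean, stdev = statistics.mean(valid), statistics.stdev(valid)
    for v in valid:
        if stdev > 0 and abs(v - mean) > 2 * stdev:
            print(f"possible outlier: {v}")

In practice such checks would run automatically each reporting period, with flagged entries routed back to the responsible office for verification.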
No performance measurement system is—or ever will be—perfect. The most important question is whether the performance data are sufficiently complete, accurate, and consistent to document performance and support decision making at various organizational levels. If the answer is positive, the system can be considered adequate on technical quality.

THE COST OF PERFORMANCE MEASUREMENT

The cost of performance measurement is always a significant issue. Agencies must balance the cost of performance data against the value added by the information obtained. Finding the appropriate balance is the key.

The largest single cost of a performance measurement system usually occurs during the start-up period, primarily the management and staff time used in designing and testing the measurement system. Once the system is operating, annual costs include: staff time required to collect and analyze the performance data; the costs of any performance-measurement-related contracts (such as surveys conducted for the municipality by contractors) and contract oversight; and the costs of checking data quality.

In assessing performance measurement systems, therefore, the ultimate question is whether the performance data are sufficiently useful to justify the cost of the system. Does the information significantly help the municipality improve its services to the public so as to improve the outcomes of those services? And is the increased accountability achieved worth the costs of performance measurement?

The use of performance information is described in Step 7.
Step 4. ANALYZE PERFORMANCE DATA

THE IMPORTANCE OF ANALYZING PERFORMANCE DATA

After an agency has collected all these data, it needs to examine and analyze them to identify appropriate actions that may be needed. Analyzing the performance data after they have been collected is a vital part of any outcome measurement system, yet it has often been done in an overly casual way. The suggestions in this step are aimed at transforming this key step into a more systematic, and considerably more useful, source of information for making service improvements.

This step suggests ways in which agencies can analyze performance data to help make service improvements. It does not, however, discuss the subsequent analysis needed to relate the performance information to cost data, which is also needed for planning and budgeting; that is discussed later under Step 7.

HOW PERFORMANCE DATA CAN HELP

Analysis of data from a well-conceived performance measurement system can help an agency:
• Identify the conditions under which a program is doing well or poorly and thus stimulate remedial actions
• Raise key questions regarding a service that can help staff develop and carry out improvement strategies
• Provide clues to problems, and sometimes to what can be done to improve future outcomes
• Help assess the extent to which remedial actions have succeeded

The focus in this step is on ways municipalities can examine information to help agencies determine what changes and steps toward improvement, if any, should be taken. The focus is on basic tasks that all agencies can undertake, not on more sophisticated approaches such as extensive statistical analyses or in-depth impact evaluations.

Exhibit 4-1 lists a number of tasks for analyzing program outcome data. Each of these tasks is discussed below.
Exhibit 4-1. Basic Tasks for Analyzing Program Outcome Data

Preliminary Task
Task 1. Tabulate the data for each performance indicator

Examine the Aggregate Outcome Data
Task 2. Compare the latest overall outcomes to outcomes from previous time periods
Task 3. Compare the latest overall outcomes to pre-established aggregate targets
Task 4. Compare the program's outcomes to those of similar programs—and to any outside benchmarks, such as performance levels achieved by other local governments

Examine "Breakout" Data
Task 5. Break out and compare outcomes by various categories of the workload, especially important characteristics of service customers (such as the city district in which they live, their age group, and/or their income group)
Task 6. Break out and compare outcomes by service characteristics, such as the type and amount of service the customer received
Task 7. Compare the latest outcomes for each breakout group with outcomes from previous reporting periods and with targets

Examine Findings Across Indicators
Task 8. Examine consistency and interrelationships among inputs, outputs, and outcomes
Task 9. Examine the outcome indicators together to obtain a more comprehensive perspective on performance

Make Sense of the Numbers
Task 10. Identify and highlight key findings
Task 11. Seek explanations for unexpected findings
Task 12. Provide recommendations to officials for future actions, including experimentation with new service delivery approaches
DO SOME PRELIMINARY WORK

Task 1. Tabulate the data for each performance indicator

Compute the values of each performance indicator for the reporting period. The analysts need to decide, for each indicator, the numeric form in which the results will be presented. For most performance indicators this means deciding whether to express the indicator as a number, as a percentage or rate, or both. (For example, should infant mortality be presented as the "number of infant deaths," as the "rate of infant deaths per 1,000 births," or both?)

EXAMINE THE AGGREGATE OUTCOME DATA

Task 2. Compare the latest overall outcomes to outcomes from previous time periods

Examine changes over time. After data become available for more than one reporting period for a performance indicator, the latest findings can be compared with findings for prior reporting periods to detect trends and other significant changes. If the data indicate substantial worsening or improvement, the agency should attempt to identify why this occurred. The following questions can help identify reasons for changes:
• Have external factors significantly affected outcomes?
• Have special events significantly affected outcomes during the reporting period?
• Have resources been reduced (or increased) to a degree that affected outcomes?
• Have legislative (central or local government) requirements changed in ways that affected the ability of the program to produce outcomes?
• Have program staff changed their procedures in a way that affected outcomes?

For example, if the number of housing fires and the loss of life and property have been increasing in recent years, the relevant agency needs to determine the causes and the extent to which they can be prevented, such as through housing inspections or building codes, or by educating smokers about cigarette disposal.

Reminder: When comparing reporting periods of less than one year, seasonal factors can be present and can affect outcomes, such as the condition of roads and the rate of unemployment. In such cases, the program should compare performance data for a given season with data for the same season in previous years.

This task is particularly helpful when an agency wants to assess the success of a change it has made in a service delivery procedure. In such cases, the agency can compare performance values that occurred before the new procedure was introduced to values that occurred afterwards.

Example: The municipality might have changed the application process for new businesses in order to speed up the process. Comparing the average (or median) time before the change to the time afterwards indicates whether a significant improvement occurred as expected. If not, the municipality would need to consider other actions.
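A before-and-after comparison of this kind requires only a change point and two averages. Here is a minimal Python sketch, using quarterly values that echo Exhibit 4-2 (shown next) and assuming the change took effect from the sixth quarter:

    from statistics import mean

    # Hypothetical quarterly average response times (days), in time order;
    # the values echo Exhibit 4-2 below.
    quarterly_days = [53, 51, 56, 49, 53, 53, 49, 47, 43, 45, 44]
    change_point = 5  # assumption: the new process took effect from the 6th quarter

    before = mean(quarterly_days[:change_point])
    after = mean(quarterly_days[change_point:])
    print(f"average before change: {before:.1f} days")  # 52.4
    print(f"average after change:  {after:.1f} days")   # 46.8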
Exhibit 4-2 shows that average response times for processing business loan requests declined from 52 to 46 days after an automated process was introduced. Whether this improvement is sufficient to warrant changing the process for other types of loans is a judgment for the agency to make.

Exhibit 4-2. Comparison of Outcomes Before and After Process Change

Average Response Time (Days), by Quarter
1997: Q1 53, Q2 51, Q3 56, Q4 49
1998: Q1 53, Q2 53, Q3 49, Q4 47
1999: Q1 43, Q2 45, Q3 44, Q4 N/A

Average before change (1997 Q1 through 1998 Q1): 52.4
Average after change (1998 Q2 through 1999 Q3): 46.8
(The automated process was introduced after the first quarter of 1998.)

Source: Performance Measurement: Getting Results. Washington, DC: The Urban Institute Press, 2006, p. 157.

Caution: The evidence provided by such "before-and-after" values is weak. It should not be relied on exclusively in making agency decisions about change. Other factors may have been present that could have caused the change, and they should be considered before deciding what to do next.

Task 3. Compare the latest overall outcomes to pre-established targets

Targets might be those set specifically for each service or identified in a city-wide strategic planning process, such as an MDG-based strategy. The municipality and its departments might select targets based on the results achieved in previous years, on the targets included in the municipality's strategic plan, on the targets included in MDG plans established by the national government, on targets used by other governments, or on a combination of these. Actual values for each performance indicator should then be compared to these targets, such as may be done as part of the municipality's budgeting process. Indicators whose actual values are much worse, or much better, than the targets should be identified and an attempt made to determine why the difference occurred. (Setting targets is discussed further in Step 6.)

Task 4. Compare the program's outcomes to those of similar programs—and to any outside benchmarks, such as performance levels achieved by other local governments

If comparable data are available on a performance indicator from other similar programs, these data can be used for comparisons. These other programs might be located within the agency, in other agencies in the municipality, or in other jurisdictions. If substantially better outcomes have been achieved elsewhere, ask program staff to assess why.

For MDG performance indicators, data are becoming increasingly available on the internet, at least at the country level. In the future such data may also become available for individual municipalities. For example, Appendix E shows that the available country-level value for the indicator "proportion of population using improved sanitation facilities" in urban Kyrgyzstan was 75% in 2004. This value gives at least a rough idea of what has been achieved in other countries and thus can be used as a benchmark for a municipality.

For some indicators, external standards may be, or may become, available against which to compare a jurisdiction's own values, such as a central government's drinking water and air quality standards. These standards can even be built into the performance indicator.
For example, a municipality might use an indicator such as "Percent of days on which water quality did not meet national water quality standards." If the standard is not already built into the performance indicator, the level measured, such as the amount of a certain pollutant in the municipality's air or water, can be compared to the national standard.

EXAMINE "BREAKOUT" DATA

Tasks 5, 6, and 7. Break Out and Compare Outcomes by Workload Characteristics (Task 5) and by Service Characteristics (Task 6), and Compare Them with Previous Reporting Periods and Targets (Task 7)

These three tasks are likely to be done jointly and so are discussed together. Here, ways to examine that information are discussed.

Examine the breakouts for each outcome indicator to assess where performance is good, fair, or poor. Compare the outcomes for various breakouts such as:
• Customer characteristics (Do females have substantially more, or fewer, health problems, or less education, than males?);
• Organizational units (Does street cleanliness differ substantially among geographical areas served by different waste collection crews?);
• Workload difficulty (Does it take municipal employees substantially longer to process certain types of business-permit applications than others?); and
• Type and amount of service (Are there substantial differences in outcomes for clients of the municipality's employment training program under training approaches that vary in the type or amount of training provided?).

For any subgroups whose service outcomes appear to have been particularly bad, the agency should seek out the reasons and take corrective action. (This will be discussed further under Step 7.)

For subgroups whose performance appears to have been particularly good, the municipal agency should seek explanations to help it assess whether these successes can be transferred to other groups. For example, if the outcomes for younger clients are particularly good in a particular employment training program, the agency might consider actions directed toward improving the outcomes for other age groups, or it might reconsider whether that type of program is appropriate for the other age groups.

Comparing breakouts across organizational units will indicate which units have particularly weak outcomes—and need attention, such as training or technical assistance. This information can also be used as a basis for rewards (whether monetary or non-monetary) to persons or organizations with particularly good outcomes or efficiency levels.

Exhibit 2-11 listed a number of breakout categories likely to be relevant to municipal programs for which people are the primary workload (unlike, for example, road maintenance programs, for which other characteristics, such as average daily travel or soil conditions, are likely to be the appropriate breakout categories).

Exhibit 4-3 provides an example of a report for a health program that provides comparisons across three client demographic characteristics (gender, age group, and race/ethnicity) and three service characteristics (number of sessions clients attended, the facility used, and the attending caseworker).
Exhibit 4-3. Sample Comparison of Multiple Breakout Characteristics
Clients That Reported Improved Functioning After Receiving Health Care
(Columns: number of clients; % considerable improvement; % some; % little; % no improvement)

Gender
  Female                  31   10   19   55   16
  Male                    43   30   40   21    7
Age Group
  21-30                   13   23   31   31   15
  31-39                   28   21   32   36   11
  40-49                   24   21   29   38   13
  50-59                    9   22   33   33   11
Race/Ethnicity
  African-American        25   32   20   32   16
  Asian                    5    0   60   20   20
  Hispanic                20   15   40   40    5
  White/Caucasian         24   21   29   38   13
Number of Visits
  1-2                     13   15    8   54   23
  3-4                     21   24   33   33   10
  5+                      40   23   38   30   10
Facility
  Facility A              49   24   27   35   14
  Facility B              25   16   40   36    8
Caseworker
  Health Care Worker A    19   26   26   42    5
  Health Care Worker B    18   11   39   33   17
  Health Care Worker C    18    6   17   56   22
  Health Care Worker D    19   42   42   11    5
All Clients               74   22   31   35   12

Source: Analyzing Outcome Information: Getting the Most from Data. Washington, DC: The Urban Institute, 2004.

Comparisons can be made of outcomes both within and among each characteristic. For example, Exhibit 4-3 indicates that, for the reporting period, the health care program achieved considerably poorer outcomes for females than for males. The agency should ask such questions as: Why did this occur? Has this also been the case in previous years? How close were these actual results to the targets set for these groups?
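Breakout tables like Exhibit 4-3 are, at bottom, cross-tabulations. A minimal Python sketch, with a handful of hypothetical client records, shows the computation:

    from collections import Counter, defaultdict

    # Hypothetical client records: (gender, improvement category).
    clients = [
        ("Female", "No"), ("Female", "Little"), ("Female", "Little"),
        ("Male", "Considerable"), ("Male", "Some"), ("Male", "Some"),
    ]

    # Count improvement categories within each gender breakout.
    counts = defaultdict(Counter)
    for gender, improvement in clients:
        counts[gender][improvement] += 1

    # Print each breakout row as percentages, as in Exhibit 4-3.
    for gender, tally in counts.items():
        n = sum(tally.values())
        row = ", ".join(f"{cat}: {cnt / n:.0%}" for cat, cnt in sorted(tally.items()))
        print(f"{gender} (n={n}): {row}")

The same pattern extends to any breakout characteristic (age group, facility, caseworker) by changing the grouping key.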
The exhibit also indicates that most clients who attended only one or two sessions showed little or no improvement, as did most of the clients of health care worker C. It appears likely that many females attended only one or two sessions and had health care worker C. (These possibilities can be checked by cross-tabulating the data for these characteristics.)

This example also illustrates the danger of jumping to conclusions too soon. Do the data in Exhibit 4-3 show that health care worker C is a poor health provider? Not necessarily. For example, health care worker C might have assisted many females under circumstances where they could attend only one or two sessions. As is typical, the data indicate what has happened, but more information on why it happened is almost always needed before actions should be taken.

To make comparisons more meaningful, the analysis should examine each organizational unit's outcomes by a variety of relevant breakout characteristics, such as customer demographic characteristics and difficulty of the incoming workload. Exhibit 4-4 illustrates a breakout for a program providing assistance to small businesses. The analysis examines the outcomes for each of three small business assistance offices for each of three levels of difficulty the client businesses were experiencing at the time they came in for help. The footnotes in the exhibit indicate the type of action a program might take in light of such outcome data.

Exhibit 4-4. Outcomes by Organizational Unit and Difficulty of Pre-service Problems (Small Business Assistance Program)
Percent of Clients Whose Outcomes Have Improved 12 Months After Intake (a)

Difficulty of problems at intake | Office 1 | Office 2 (b) | Office 3 (c) | All units
Minor                                  52         35            56            47
Moderate                               35         30            54            39
Major                                  58         69            61            63
Total                                  48         44            57            50

(a) Tables such as these should also identify the number of clients in each cell. If a number is very small, the percentages may not be meaningful.
(b) Office 2 clients with minor problems at intake have not shown as much improvement as hoped. Office 2 should look into this (such as by identifying what the other offices are doing to achieve their considerably higher success rates), report on its difficulties, and provide recommendations for corrective actions.
(c) A substantial proportion of office 3 clients with moderate problems at intake showed improvement. The program should attempt to find out what is leading to this higher rate of improvement so offices 1 and 2 can use the information. Office 3 should be congratulated for these results.

Source: Performance Measurement: Getting Results, 2nd Edition. Washington, DC: The Urban Institute Press, 2006, p. 125.

Finally, relating outcomes to the type and level of service provided to individual customers can be very helpful to program managers in determining which procedures work and which do not. To do this, the program will need to identify for each customer the type or amount of service provided and connect that information to each of the relevant outcome indicators.

For example, suppose an employment-training program is trying different ways to train unemployed persons, such as varying the degree to which the training relies on small versus large group sessions. The program should record which clients were trained in each way and then relate that information to success in getting each of those clients employed.
The overall success rate for clients in each group can then be compared to identify which, if any, of the approaches had a substantially higher rate of employment success.

Exhibit 4-5 illustrates a report comparing employment success for two lengths of training programs and, at the same time, compares these for clients with different levels of education at entry into the employment-training program. It indicates that persons with little education were helped much more by the long program, while those with more education were not. Therefore, the training program could save money, and obtain more overall benefit, by using the short program only for persons with more education and the long program only for those with little education.

Exhibit 4-5. Comparison of Different Program Variations
Percent of Clients Employed Three Months after Completing Service

Education Level at Entry | N | Short Program | Long Program | Total
Completed high school | 100 | 62% employed (of 55 clients) | 64% employed (of 45 clients) | 63% (of 100 clients)
Did not complete high school | 180 | 26% employed (of 95 clients) | 73% employed (of 85 clients) | 48% (of 180 clients)
Total | 280 | 39% (of 150 clients) | 70% (of 130 clients) | 54% (of 280 clients)

Is action needed? Encourage clients who had not - rather than had - completed high school to attend the long program. Use these figures to help convince clients of the longer program's success in helping clients secure employment.

Adapted from: Analyzing Outcome Information: Getting the Most From Data. Washington, DC: The Urban Institute, 2004, p. 20.

Another example: Suppose a road repair program has a choice of various road repair materials. It could use each type of material on a random selection of streets (or on groups of streets selected so that they have similar traffic and soil conditions). If the program assessed the condition of each group of streets several months later, it would have evidence as to which type of material held up better. Combined with cost information on each type of material, the program could then make an informed decision as to which type to use in the future.

EXAMINE FINDINGS ACROSS INDICATORS

Task 8. Examine Consistency and Interrelationships Among Inputs, Outputs, and Outcomes

The amount of input (e.g., funds and staffing) should be consistent with the amount of output. The amount of output, in turn, should be consistent with the amount of intermediate and end outcomes achieved.

If an agency has not been able to produce the amount of output anticipated, the amount of outcome that can be achieved is also likely to be less than expected. Similarly, if the expected intermediate outcomes did not occur as hoped, end outcomes can be expected to suffer as well. These relationships do not always hold, but they can sometimes help explain why measured outcomes were not as expected.
For example, if a program unexpectedly lost staff during the year, this would likely be an important reason for a smaller number of service customers helped. Similarly, if fewer clients came in during the year than budgeted for, this too would likely be an important reason for fewer customers helped.

Task 9. Examine the Outcome Indicators Together to Obtain a More Comprehensive Perspective on Performance

Most programs will need to track more than one outcome indicator. It is tempting to examine these indicators only separately. However, programs should also examine the set of outcome indicators together in order to obtain a better understanding of performance and, thus, of what improvements may be needed.

A program might, for example, examine the extent to which improvements in intermediate outcome values, such as reduced agency response times, are associated with greater subsequent successes, such as a reduced number of contagious disease victims or reduced fire losses.

Another example: If the "number of persons assisted" by program staff has declined, it can be expected that the "number of assisted businesses whose situation improved" would also decline. However, declines in the number assisted might also lead to improvements in other outcomes: smaller numbers of persons assisted might lead to higher rates of success, because program staff might then be able to spend more time with each business.

Another example: The manager of a traffic safety program might find that an indicator based on trained observer ratings showed traffic signs and pavement markings to be in satisfactory condition, while another indicator, based on findings from a citizen survey, showed that a substantial percentage of citizens had problems with the signs. A third indicator showed increasing traffic accidents. A fourth indicator reported a high percentage of delayed responses to requests to fix traffic sign problems. The agency would need to consider all of these findings (and others) in determining what action, if any, is needed.

A program sometimes has directly competing objectives. In such cases, the outcomes relating to these multiple objectives need careful examination to achieve a reasonable balance. For example, reducing school dropouts might lower average test scores because more students with academic difficulties are tested. Improved water quality might be associated with reduced economic performance in an agricultural industry. Agencies need to examine these competing outcomes together to assess overall program performance.

MAKE SENSE OF THE NUMBERS

Task 10. Identify and highlight key findings

Performance measurement systems are likely to provide large amounts of data each reporting period -- too much for many, probably most, managers and staff to absorb. Therefore, an important element of the data analysis process is to establish a process that highlights the data most warranting attention.

A simple step is to ask someone in the program to examine the comparisons, such as those described above (Tasks 2-9), and make judgments as to which of the performance findings are important. The examiners might prepare written highlights, or merely flag the data (such as by circling or marking it in red).

A more formal procedure is to establish an "exception reporting" process. The program establishes target ranges within which it expects the values of its indicators to fall and concentrates on indicators whose values fall outside those ranges. (This approach is an adaptation from the field of statistical quality control, sometimes used by manufacturing organizations.) Once chosen, the target ranges can be programmed into performance report software so that indicator values falling outside them are automatically highlighted for program attention.
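A minimal Python sketch of such an exception report follows; the indicators and target ranges are hypothetical, and only values falling outside their ranges are flagged:

    # Hypothetical latest indicator values and target ranges (low, high).
    latest_values = {
        "street cleanliness rating (1-4)": 2.4,
        "fires per 1,000 households": 6.1,
        "permit response time (days)": 9.0,
    }
    target_ranges = {
        "street cleanliness rating (1-4)": (2.5, 4.0),
        "fires per 1,000 households": (0.0, 5.0),
        "permit response time (days)": (0.0, 10.0),
    }

    # Report only the exceptions: values outside their target range.
    for indicator, value in latest_values.items():
        low, high = target_ranges[indicator]
        if not low <= value <= high:
            print(f"EXCEPTION: {indicator} = {value} (target range {low} to {high})")

Run each reporting period, this prints only the two out-of-range indicators, leaving managers free to concentrate on them.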
Task 11. Seek Explanations for Unexpected Findings

A performance measurement system should explicitly call for explanatory information along with outcome and efficiency data. This is particularly important in situations where the latest outcome data are considerably worse than anticipated.

The municipality's chief executive officer and its elected officials might require explanations for below-expected performance (as has been done by New Zealand and some states in the United States).

A municipality might obtain explanations from such sources as the following:
• Discussions with program personnel
• Discussions or focus groups with program customers
• Responses to open-ended questions on customer surveys
• Examination of the breakout data (as suggested under Tasks 5-7)
• Special examinations by teams selected by the program
• In-depth program evaluations

Explanatory information can take many forms. Probably the most used is qualitative judgments by program personnel as to why the outcomes were the way they were. Such judgments might be mere rationalizations and excuses; however, program personnel should be encouraged to provide meaningful information.

Special studies, such as in-depth program evaluations, can be expensive and time-consuming and thus can be done on only a small fraction of an agency's programs in any given reporting period.

Between these two extremes, program personnel should usually be able to provide a variety of information, some quantitative and some qualitative, that will reveal the reasons for problems. Likely reasons include the following, each of which requires a different program response:
• Staff and/or funding changes, such as cutbacks
• Legislation or regulatory requirements that have changed or been found inappropriate
• Poor implementation (for example, inadequate training, inexperience, or low motivation of staff)
• External factors over which the program has limited or no control, such as an increasingly difficult workload; significant change in the international, national, provincial, or local economy; unusual weather conditions (e.g., unusually heavy rains can increase runoff, leading to increased pollution of rivers and lakes); new international pressure or competition; new businesses starting up or leaving the jurisdiction (thus affecting outcomes such as employment and earnings); and/or changes in the composition of the relevant population
• Problems in the program's own practices and policies

Another source of explanatory information is the responses to open-ended questions on customer surveys. As noted in Step 3, if an agency surveys its customers to obtain performance data, the questionnaire should give respondents the opportunity to explain the reasons for the ratings they gave (particularly any poor ratings) and to provide suggestions for improving the service. Tabulations of responses provide clues to the causes of poorer-than-desired performance and may provide useful suggestions for improvement.
Sometimes breakouts themselves can provide explanations. If, for example, the program breaks out "failures" by their likely reasons, this information can be tabulated across all customers to identify likely reasons for the less-than-satisfactory performance. For example, traffic safety agencies might identify the causes of traffic accidents. Performance indicators should identify the total number of accidents and disaggregate the total into categories by cause, allowing the agency to focus on causes it can change. Traffic accidents due primarily to mechanical failure or bad weather, for example, are much less controllable by municipal agencies than accidents related to problem intersections or poor traffic signs and signals.

Another example: Surveys that seek information on citizen participation rates (such as citizen use of public transit, libraries, parks, and other services) can ask non-participating citizens why they did not use the service. Such reasons might include:
a. Did not know about the service
b. Service times were inconvenient
c. Service locations were inconvenient
d. Heard that the service was not good
e. Had previous bad experiences with the service
f. Can't afford to pay for the service
g. Don't need the service
h. Don't have time for the service

Responses (a) through (f) refer to things that can potentially be corrected by the municipality. For example, if a substantial proportion of respondents indicated that the hours of operation fell when they had to be at work, the agency could consider whether changes in its service hours are feasible. The last two responses, don't need and don't have time for the service, are reasons over which the agency probably has little or no influence. No action by the municipality is likely to be available for these categories.

Note that an agency may not be able to take direct action itself but may still want to provide suggestions to other or higher levels of government. For example, while vehicle mechanical failures may not be within the control of local governments, the central government can take action if significant patterns of such failures occur. Municipalities should recognize that responsibility for some indicators is actually shared with other organizations. Sharing may be among multiple agencies within the government, with other levels of government (such as the district, provincial, and central levels), and even with other sectors of the economy, such as businesses, churches, and individual citizens (such as the responsibility of citizens to get their children to school).

The important point here is that by properly designing data-gathering instruments and analyzing the resulting data, the program can obtain important clues as to what the problems are and what the program can do about them.

Additional suggestion: Categorize each outcome indicator by the degree of influence the program has over it and include this information in performance reports. The degree of influence might be expressed in three categories, such as little or no influence, modest or some influence, and considerable influence. An agency using such categories should define them as specifically as possible and provide illustrations of each. Such categorization helps users of the information understand the extent to which the agency is likely to be able to affect outcomes. (For an outcome indicator to be included in a program's set, the program should have some influence over it, even if small.) The outcomes can then be broken out by these influence categories. This will provide users of the outcome information with better, and fairer, information for interpreting the outcome data.
For most outcome indicators, agencies and their programs will be less able to influence end outcomes than intermediate outcomes. Even most intermediate outcomes are not likely to be fully controllable by any agency. This does not absolve agencies of the responsibility to recognize the amount of influence they do have and to take action to attempt to improve the outcomes for citizens.

Task 12. Provide Recommendations to Officials for Future Actions, Including Experimentation with New Service Delivery Approaches

Those who examine the data, whether professional analysts, staff, or managers, should, where possible, also provide recommendations to other officials based on what the performance information shows. Those who have examined the data in some detail are likely to gain insights as to what should, or should not, be done.

Elsewhere, we have emphasized that users of the information should not jump to conclusions and take action based solely on the performance data. Normally the appropriate recommendations will be to undertake other types of examinations into causes and ways to alleviate them, drawing as well on cost information. When performance data indicate the presence of problems, the solutions are often not clear. However, if analysts find performance problems based on procedures such as those discussed above, they should be able to suggest one or more of the following types of actions:
• A wait-and-see strategy, in the expectation that the unsatisfactory outcomes are aberrations—the result of a temporary problem rather than a trend—and will correct themselves in the future
• Specific corrective procedures (with provisions that when future outcome reports become available, they should be assessed to determine whether the actions appear to have resolved the problems)
• Further examination of program elements that the explanatory information indicated might be causing problems
• An in-depth evaluation to identify causes and what corrective actions should be taken
• An experiment to test a new procedure against the current one

The experimental option may be particularly interesting to officials who like to innovate and try out new ways to provide services. Sometimes it will be practical to design a "simple" experiment with a new or modified service approach and use performance data to assess the results before making a commitment to the new approach. A program that has an ongoing performance measurement system can use data from that system to help evaluate the new procedure.

An example is provided in Exhibit 4-6. The exhibit compares computer to manual processing of eligibility determinations in a program that applied the new process to part of its incoming work.
The program then tracked two outcomes separately for each procedure for a few months and finally compared the outcomes for the two procedures.

Exhibit 4-6. Computer versus Manual Procedures for Processing Eligibility Determinations*

Procedure | Error Rate (%) | Applications Taking More Than One Day to Process (%)
Computer        9                18
Manual          8                35

* About 250 applications were processed by each procedure.

Source: Performance Measurement: Getting Results. Washington, DC: The Urban Institute Press, 2006, p. 144.

A simpler and more common, but less powerful, approach to examining new or modified processes is to compare outcomes only for the old service procedures with the outcomes that occurred after introduction of the new procedures. Exhibit 4-2 (the table showing "Comparison of Outcomes Before and After Process Change," shown earlier in this section) illustrates this approach.

The above approaches are similar to a variety of standard program evaluation procedures. Exhibit 4-2 is an illustration of a pre-post program evaluation. The example in Exhibit 4-6 is an illustration of an experimental approach. If the incoming applications for eligibility can be assigned randomly to the two different processing procedures, this is a particularly powerful evaluation approach, an example of a "random assignment controlled experiment." A full program evaluation would use the outcome data but add such steps as more extensive statistical analysis and an intensive search for explanations for the outcomes.
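The random-assignment idea can be sketched in a few lines of Python. The processing outcomes below are simulated with rates that loosely echo Exhibit 4-6; in practice the outcomes would come from the program's own records:

    import random

    random.seed(1)  # fixed seed so the illustration is reproducible

    # Simulated outcome: whether an application takes more than one day.
    # The assumed rates loosely echo Exhibit 4-6 (computer 18%, manual 35%).
    def takes_more_than_one_day(procedure: str) -> bool:
        slow_rate = 0.18 if procedure == "computer" else 0.35
        return random.random() < slow_rate

    # Randomly assign each incoming application to one of the two procedures.
    results = {"computer": [], "manual": []}
    for _ in range(500):
        procedure = random.choice(["computer", "manual"])
        results[procedure].append(takes_more_than_one_day(procedure))

    for procedure, outcomes in results.items():
        share = sum(outcomes) / len(outcomes)
        print(f"{procedure}: {share:.0%} of applications took more than one day")

Because assignment is random, differences between the two groups' outcomes can more credibly be attributed to the procedure itself rather than to differences in the applications each procedure received.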
Step 5. REPORT PERFORMANCE RESULTS

THE IMPORTANCE OF GOOD REPORTING

The importance of good presentation of performance results has only recently begun to be fully recognized. Technology advances are making it much easier to produce clear, attractive reports. A variety of forms of graphics, and even color, have become much easier and less costly to use. In addition to traditional tables, performance data can be provided in the form of bar charts, trend charts, pie charts, maps, and other graphics. Photographs can be added to illustrate performance results.

How the findings are reported is likely to be as important as what is reported.

Of course, report presentation also vitally needs attention to content: what information needs to be provided to the various audiences? The focus in this step is on written, not oral, reporting. Oral reporting techniques are also important but are beyond the scope of this manual.

Below, we first address internal reporting and then external reporting (when reports become public).

INTERNAL REPORTING

Internal performance reporting is vital to stimulating service improvements. The form, substance, and dissemination of performance reports play a major role in providing useful feedback.

Key questions to ask yourself regarding internal performance reporting include:
• Are the reports clear?
• Do they contain useful, important information?
• Are they timely? Are the data reported sufficiently frequently, and are the data in the reports reasonably up-to-date when reported? (Some performance indicators may need to be reported more frequently than others. For example, data from household surveys are likely to be needed less frequently, perhaps only annually or quarterly, while reports on the incidence of crimes, fires, water main breaks, and the like need to be produced considerably more frequently, to enable managers to take timely corrective actions.)
• Are the reports adequately summarized or highlighted to allow very busy managers to digest the information in a reasonable amount of time?
• Are they disseminated to all those who need and can use the information? Often missing is dissemination to the persons who are most involved and can do something about the data: the first-line staff!
A program is likely to find it useful to track a large number of outcome indicators for internal use. For external reporting, however, a considerably shorter list is likely to be appropriate. The municipality's and departments' highest officials and the local legislative body are likely to want a relatively short list of indicators.

SOME EXAMPLES OF REPORT FORMATS

Throughout the world, it is surprising how difficult performance reports have been to read. A number of formats are illustrated here. The first formats use tables. The intent of each is to illustrate the presentation of comparisons, a key way to make performance data useful and more interesting to readers. These tabular formats (which use hypothetical data) can be used for both internal and external reports. They are a sample of the many formats that can be constructed based on the special needs of a program.

FORMAT 1, EXHIBIT 5-1, compares actual outcomes to targets for at least one earlier period, and sets new targets, for each of several outcome indicators. This is a very useful format when setting new targets for the future.

FORMAT 2, EXHIBIT 5-2, compares actual outcomes to targets for both the last and current reporting periods, for each of a number of outcome indicators. This format is likely to be a key one for most programs.

FORMAT 3, EXHIBIT 5-3, is similar to Format 1 but shows values both for the current reporting period and cumulatively for the year. This format is useful for outcome measurement systems that provide data more frequently than once per year (as is usually desirable).

FORMAT 4, EXHIBIT 5-4, compares the latest outcomes for various geographical locations. This format is useful for making comparisons across any breakout categories identified by the program. For example, a program may want to illustrate comparisons across managerial units or particular customer characteristics. To do this, the program would change the column labels in Exhibit 5-4 to correspond to the relevant breakouts.

FORMAT 5, EXHIBIT 5-5, displays outcome data for one indicator broken out by a number of client demographic and service characteristics (in this case, satisfaction with several services across several cities). This format permits a number of comparisons for assessing for which categories of clients, and for which forms of service, results have been good or poor. It is likely to be highly useful for internal reports in identifying where improvements are likely to be needed. This multiple cross-tabulation enables program staff to identify which respondent characteristics show unusually positive or negative results for a particular outcome indicator. The format can be used to report on any indicator for which data on a variety of customer or program characteristics have been obtained. In the hypothetical data shown, the findings suggest that the program should seek to find out why it had such a low success rate with females and why health care worker C had such a low success rate. Data obtained in later reporting periods will indicate whether any changes made improved outcomes for patients in those groups.

The above formats use tables to present the data. Other graphic presentations can be more attractive to users, especially for external consumption. A picture is often worth 1,000 words (or numbers)! Options include:
Graphs: Graphs are especially good for showing trends -- the values of an indicator plotted against time, perhaps by month, quarter, or year. Exhibit 5-6 is an example.

Bar charts: These are an excellent way to show comparisons. Exhibit 5-7 displays a series of bar charts rating a number of outcome indicators for New York City's transit system. Similar ratings were provided for each of the subway system's 19 lines. The published report used color, making the presentation considerably more attractive than this exhibit. (These data were obtained, assembled, and analyzed by a citizens group, the "Straphangers" Campaign.)

Maps: Mapping performance information has become very popular as inexpensive mapping software has emerged. Maps are a dramatic way to present geographical data, such as by neighborhoods or districts within municipalities. To make clear which neighborhood is which, the maps might show numbers inserted in each geographical area rated, use shading or colors to distinguish various rating levels, do both (as shown here), or be accompanied by a table displaying the values for each neighborhood. Exhibit 5-8, through the use of map shading, compares ranges of low-weight births across neighborhoods. Next to each of these maps in the report, a table provided the actual values for each neighborhood. For some map presentations, if ample room is available, the actual values might be included on the map itself.

The geographical areas might display such indicators as the percent of residents in each neighborhood who are employed, are healthy, are satisfied with particular services, have low rates of feeling unsafe walking around their neighborhoods during the day or at night, and so on.

Some caveats on graphics: with the rise of easy computer graphics has come a tendency to overdo the presentations, sacrificing clarity for artistic endeavor. For example, pie charts look nice, but they are not very good at making it easy for readers to judge size differences. Performance reports sometimes present bar charts with a three-dimensional option; again, this can make differences difficult to judge. If the display is only intended to give an overall impression, this may be acceptable, but not if the intention is to support careful comparisons. Similarly, when using maps, the temptation exists to put too much information on the map, reducing readability.
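As an illustration of how little effort such charts now require, here is a minimal Python sketch using the widely available matplotlib library; the districts and satisfaction figures are hypothetical:

    import matplotlib.pyplot as plt

    # Hypothetical data: percent of residents rating street cleaning "good".
    districts = ["District 1", "District 2", "District 3", "District 4"]
    percent_good = [72, 55, 81, 63]

    plt.bar(districts, percent_good)
    plt.ylabel("Percent rating service good")
    plt.title("Satisfaction with street cleaning, by district")
    plt.ylim(0, 100)
    plt.savefig("satisfaction_by_district.png")  # or plt.show() for on-screen viewing

A simple, plainly labeled chart like this usually communicates more reliably than a decorated three-dimensional one.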
IDENTIFY THE REPORT HIGHLIGHTS

A considerable danger in performance measurement is overwhelming public officials, on both the executive and legislative sides, with indicator data. Whoever is responsible for preparing the performance report (whether in the agency, in a central management or budget office, or in the legislature) needs to identify and extract what they believe to be the key findings from the data. This becomes even more important as agencies expand their reporting of indicator breakout data. What are the important issues, problems, successes, failures, and progress indicated by the data? What is likely to be of concern and interest to the audience of the report? Any important missing outcomes should also be identified.

Thus, performance reports should contain not only the data but also a summary of the report's highlights -- emphasizing information that warrants particular attention -- to help users focus quickly on the important findings. These highlights should include both "success" and "failure" (problem) stories. The summary should be a balanced account, to reduce the likelihood that readers will consider the report self-serving.

Explanatory information should be included as part of the report and should be clear, concise, and to the point.

Reports, especially those going outside the program, should also identify any actions the program has taken, or plans to take, to correct problems identified in the outcome report. This step, along with explanatory information, can help avert, or at least reduce, unwarranted criticism.

Highlighting can be done by:
• Writing out the key findings and issues raised;
• Physically highlighting the data, such as by circling or marking in red the data that raise flags (as illustrated in Exhibit 4-3 in Step 4); or
• Combinations of these.

Preferably, each performance report would contain selected performance comparisons (such as breakouts showing relevant differences in outcomes for various demographic groups) to help in identifying the key findings. (See Step 4 for a discussion of comparison options.)

PROVIDE EXPLANATORY INFORMATION IN PERFORMANCE REPORTS

Explanatory information helps readers interpret the data. It also gives program managers and their staffs an opportunity to explain unexpected, undesirable outcomes, thus potentially reducing their concern that the data will be misused against them. Guidelines:

1. Explanations can be qualitative (including judgments), quantitative, or a combination.
2. Provide explanatory information when any of the comparisons show unexpected differences in outcome values -- for example, when (a) the actual value for an outcome indicator deviates substantially from the target value (better or worse), or (b) the outcome values show major differences among operating units, categories of customers, or other workload units.
3. Distinguish internal from external explanatory factors. Program personnel are likely to have at least some influence over internal factors, such as a significant unexpected loss of program personnel (or other resources) during the reporting period. External factors might include unexpected changes in national economic conditions, highly unusual weather, or unexpected loss (or gain) of industry within a particular jurisdiction.
4. Incorporate the findings of any special evaluations that provide an in-depth examination of the program and its achievements. Such findings are likely to supersede the outcome data collected as part of the routine outcome measurement process. At the very least, recent program evaluation findings should be given prominence in the presentation of a program's outcomes.
Such studies are likely to provide considerably more information about the impacts of the program than outcome data alone can reveal.

EXTERNAL REPORTING

External reporting of performance information is a major way for a municipality to become "accountable." It enables elected officials, interest groups, and citizens to see what they are getting for their money -- at least to some extent, since the information will inevitably be filtered, at least somewhat, by the internal organization. These reports can be called "How is the municipality doing?" reports.

External reporting also has the potential operational use of motivating the municipal government to do better on the performance indicators being reported. Such motivation is likely to become stronger as more municipalities provide external performance reports, permitting each municipality to compare its service outcomes to those of similar jurisdictions. These comparisons, however, have dangers. They can be misleading and unfair for a variety of reasons, such as comparing agencies that operate under considerably different environmental conditions or that use very different data collection procedures. On the whole, however, comparisons will be made, and they can serve a useful motivational, as well as accountability, function.

Web-based reporting is beginning to replace at least some paper reports. Many local governments have their own web sites, as do individual agencies in many of the larger governments, and some have placed performance data on those sites. This trend is likely to continue, making electronic reporting a major way for citizens and interest groups to obtain performance information. Municipalities are even beginning to include data on selected performance indicators for each of their neighborhoods or districts. (For example, New York City posts data for each of its 59 community board areas.) Citizens who have computers can enter their addresses and find the relevant data.

Key issues for web-based performance reporting are:
• Many citizens, including some who are likely to be the most concerned about low levels of service, do not have ready access to the internet.
• Many persons who have access are not likely to look for performance information unless some particular issue faces them.
• The information is often not summarized in any way, leaving it to the user to extract the highlights.
• The performance information on web sites is often not kept up-to-date in a timely manner and does not contain the latest available data.

The following are suggestions for reporting outside the agency:
• Be selective as to which, and how many, indicators are included in reports. Focus on those indicators most likely to be of interest to the audience (probably not output or efficiency indicators). Selectivity does not mean selecting only those indicators that make the agency look good -- reporting needs to be balanced in order to be credible.
• Pay particular attention to making the reports easily understandable. As discussed earlier, use charts and graphs, perhaps to supplement tables. Use color if practical.
• Obtain feedback on the reports periodically from major constituencies, such as elected officials, funders, and the public (perhaps through focus groups). Ask about the usefulness and readability of the performance reports. Use the feedback to help tailor future performance reports to the particular audience.
OTHER INFORMATION THAT SHOULD BE INCLUDED WHEN REPORTING PERFORMANCE DATA - IN BOTH INTERNAL AND EXTERNAL REPORTS

In addition to explanations for unexpected findings, both internal and external performance reports should contain information such as the following, even if some of it appears only in footnotes.
• Each performance indicator should be clearly defined, including what the indicator covers and the time period covered by the performance data.
• Any important uncertainties or limitations in the data should be identified.
• For indicators based on survey data (whether surveys of citizens or surveys made by trained observers), the following additional information should also be provided (a sketch of the basic calculations appears at the end of this list):
  • The total number of items surveyed, such as the number of respondents - both in total and for each category of respondents for whom data are presented in the report (such as each gender or each racial/ethnic group)
  • The response rates (an important indicator of the likelihood of non-sampling error)
  • The dates when the survey was conducted
  • How the survey was conducted (e.g., in person, phone, mail, web-based)
  • What organization conducted the survey
• Any substantial changes in the performance indicators and the data collection procedures from previous reporting periods should be identified. (Changing these without giving good reasons can give the appearance of selectively hiding unpleasant findings.)
• If indices are reported: (a) the individual elements that comprise the index should be clearly and fully identified; and (b) the values for each element should be readily available.
• Programs may also want to include information on the extent to which the program can influence the indicator values.
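Two of the figures the list above asks reports to disclose can be computed directly from survey records. The following is a minimal sketch, assuming simple random sampling; the counts are hypothetical, and the margin-of-error formula shown is the standard worst-case approximation for a percentage, not a method prescribed by this guide.

```python
# A minimal sketch of two survey disclosures: the response rate and, for
# percentage indicators, an approximate 95% sampling margin of error.
import math

questionnaires_sent = 1200   # hypothetical
completed_responses = 540    # hypothetical

response_rate = completed_responses / questionnaires_sent
print(f"Response rate: {response_rate:.0%}")

# Approximate 95% margin of error for a reported percentage, using the
# worst case p = 0.5 (simple random sample, large population).
n = completed_responses
margin = 1.96 * math.sqrt(0.5 * 0.5 / n)
print(f"Margin of error: +/- {margin:.1%}")  # about +/- 4.2 points here
```

Reporting these alongside each survey-based indicator lets readers judge how much weight to give small differences between groups or between years.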
WHAT IF THE PERFORMANCE NEWS IS BAD?

Almost certainly, every performance report will include some indicators showing results significantly below expectations (for example, compared to the targets for the reporting period). A major function of performance measurement systems is to surface below-par outcomes so that those who can do something about them are alerted and, after corrective actions are taken, can assess whether the actions have produced the desired results.

Agency and program officials should include with their performance reports both explanations as to why any poor outcomes occurred and the steps taken, or being planned, to correct the problem. One city agency head once said, "If the data look good, I will take the credit. If the data look bad, I will ask for more money." That is another approach.

DISSEMINATION OF PERFORMANCE REPORTS

Performance reports should be disseminated to everyone on the program's staff as soon as possible after the data become available. Program personnel should be given the opportunity to provide any additional relevant explanatory information for the final formal report before it is released outside. This will encourage all program members to feel they are part of a team whose purpose is to produce outcomes that are as good as possible. As will be discussed further in Step 7, after each performance report the program manager might hold staff meetings on the outcome data to identify any actions that the performance data indicate are needed.

The performance report - including data, explanatory information, and the highlights or summary - should then be provided to offices outside the program. A key question is how much detail to give those outside the program. Avoid overloading outside readers with detail. Select indicators that are likely to be of most interest to those outside the program. Breakout data also need to be provided, but selectively, to avoid overwhelming readers with too many numbers. The breakouts provided should be those considered most important to report users. Outcomes broken out by customer demographic characteristics, such as race or ethnicity, are often quite important and of considerable interest. More breakout detail can be included in appendixes.

Newspapers, radio, television, and (increasingly) the Internet are all ways to disseminate the performance material. Here again, the program will need to decide what detail will be of interest to these groups. Inclusion of explanatory information and statements of corrective actions already taken, or planned, can help defuse negative reactions to data that appear to represent poor performance.

External performance measurement reporting is of special concern to agency officials, who can be expected to be particularly apprehensive about performance reports provided to the news media. The objective should be to give media representatives an understanding of what the data tell and what the data's limitations are. Avoid choosing just the data that make the agency look good - tempting as this may be. Over the long run, the media, special interest groups, and the public (possibly already suspicious of government) will catch on, and the reports will lose credibility.

Summary annual performance reports can be an effective way to communicate with citizens and increase public credibility, as long as they are user-friendly, timely, and provide a balanced assessment of performance.
Exhibit 5-1
Reporting Format 1: Performance vs. Targets and Setting New Targets

Indicator | Survey 2004 | Target 2005 | Survey 2005 | +/- | Target 2006
Percent of citizens satisfied with cleanliness in the street | 75% | 82% | 91% | +24% | 93%
Percent of citizens satisfied with cleanliness in the neighborhoods | 48% | 60% | 72% | +14% | 75%
Percent of households receiving regular garbage collection service | 70% | 78% | 75% | +5% | 78%
Percent of cleaning service cost recovery | 54% | 75% | 76% | +22% | 85%

Source: Presentation on Street Cleanliness from Budget Presentation for 2006. Pogradec, Albania.

Exhibit 5-2
Reporting Format 2: Actual Outcomes versus Targets
(Columns grouped as Last Period and This Period)

Outcome indicator | Last Period Target | Last Period Actual | Difference | This Period Target | This Period Actual | Difference
Percent of children returned to home within 12 months | 35 | 25 | -10 | 35 | 30 | -5
Percent of children who had over two placements within the past 12 months | 20 | 20 | 0 | 15 | 12 | +3
Percent of children whose adjustment level improved during the past 12 months | 50 | 30 | -20 | 50 | 35 | -15
Percent of clients reporting satisfaction with their living arrangements | 80 | 70 | -10 | 80 | 85 | +5

Note: This format compares actual outcomes to targets for both the last and current periods. Plus (+) indicates improvement; minus (-) indicates worsening.
Source: Performance Measurement: Getting Results, 2nd Edition, Washington, DC: The Urban Institute Press, 2006, p. 181.
Exhibit 5-3
Reporting Format 3: Actual Values versus Targets
(Columns grouped as Current Period and Cumulative for Year)

Outcome indicator | Current Period Target | Current Period Actual | Cumulative Target | Cumulative Actual | Year's target
Percentage of parents reporting knowledge or awareness of local parental resource center activities | 75 | 70 | 70 | 65 | 70
Percentage of parents reporting that parental resource centers led to their taking a more active role in their child's development or education | 50 | 65 | 50 | 60 | 50

Note: This format shows cumulative values for a year rather than for previous reporting periods. It will only be useful for outcome measurement systems that provide data more than once a year.
Source: Performance Measurement: Getting Results, 2nd Edition, Washington, DC: The Urban Institute Press, 2006, p. 181.

Exhibit 5-4
Reporting Format 4: Outcomes by Geographical Location

Outcome indicator | Eastern | Central | Mountain | Pacific | United States
Percent of schools participating in the program | 30% | 15% | 20% | 35% | 29%
Number of students enrolled in courses that had not been available previously | 1,500,000 | 600,000 | 850,000 | 1,950,000 | 4,900,000
Percentage of students reporting increased interest in school because of distance-learning activities in their classes | 65% | 90% | 85% | 75% | 77%

Note: The format makes comparisons across any breakout categories identified by the program, such as managerial units, individual projects, schools, school districts, or particular student characteristics.
Source: Adapted from Performance Measurement: Getting Results, 2nd Edition, Washington, DC: The Urban Institute Press, 2006, p. 182.
Exhibit 5-5
2004 Survey: Citizens' Rating of Satisfaction with the Most Important Services (averages)

[Bar chart. For each of eight services - water, solid waste, street cleaning, sewage, roads, street lights, storm sewer, and public transport - the chart shows average satisfaction ratings, on a scale of 1 to 5, for the cities of Zestaponi, Poti, Lagodekhi, Ozurgeti, Mtskheta, Sagarejo, and Tkibuli.]

(1 = very satisfied; 5 = very dissatisfied)
Note: Services included in this chart were listed on the most important services lists for at least three of the five pilot cities.
Source: USAID Local Government Reform Initiative (LGRI) in Georgia, Georgia Customer Survey 2004.

Exhibit 5-6
Example of Use of Graphs

[Graph not reproduced here.]
Source: Progress Report on Regional Development Strategy of Fier Region, UNDP Albania, November 2005.
Exhibit 5-7
Example of Use of Bar Charts

[Bar charts not reproduced here.]
Source: State of the Subways Report Card, NYPIRG Straphangers Campaign, Summer 2004 (http://www.straphangers.org/).
Exhibit 5-8
Example of Use of Maps (Percent of Low-Weight Births by Neighborhood Cluster, Washington, D.C., 2004)

[Map not reproduced here.]
Source: Every KID COUNTS in the District of Columbia: 13th Annual Fact Book, 2006, D.C. KIDS COUNT Collaborative for Children and Families.
Step 6. SET MUNICIPAL TARGETS

THE IMPORTANCE OF SETTING TARGETS

Setting municipal targets for each performance indicator can be of considerable use to public managers, elected officials, and the public. Annual and long-range targets provide a roadmap for the jurisdiction and can be a powerful motivational tool for the government and its managers for improving service outcomes. This is especially so if: your municipality has some form of multi-year strategic plan; annual targets are set; and sub-targets are set for each reporting period during the year (such as quarterly or monthly). Out-year targets, perhaps for five years into the future, can encourage long-range thinking by program personnel and reduce the temptation to over-emphasize current results at the expense of future progress.

This step discusses the process for setting targets for individual performance indicators. Targets are the specific numerical goals for individual performance indicators for some future period, such as the coming budget year. Governments throughout the world that use any form of program, performance, or results-based budgeting include performance indicators in budget submissions and typically will include target values for the budget year.

Setting targets, at least annually, for each of your performance indicators can also be a highly useful managerial and policy-making tool. For example, if quarterly targets are set for each of your public services at the beginning of each year, and if the actual values for the quarter are calculated and reported, managers have the opportunity to review progress with their staffs and make decisions as to needed corrections.

Exhibit 6-1 is an example (from Albania) of a table of actual (2006) and targeted values (2009, 2012, and 2015) for five performance indicators, showing the country's latest available values, the values for one of the 12 Albanian regions, and the latest available European Union values.
Exhibit 6-1
Table of Actual and Targeted Values for the National and Regional Government
(Current and forecasted indicators; the markers 1.1, 1.2, and 1.3 refer to the related MDG targets)

Indicator | Albania national average | Kukës region | 2006 | 2009 | 2012 | 2015 | EU average (data from Eurostat unless otherwise noted)
1. Unemployment rate (%) [1.1, 1.3] | 14.6 (INSTAT 2002) | 29.24 (2002, INSTAT) | 22 | 20 | 14 | 14 | 8% (2003 EU average)
2. % of families benefiting from social assistance [1.1, 1.3] | 22.06 (INSTAT 2002) | 56.6 (2001, district level) | 50 | 40 | 30 | 20 | 5.97% (1992, based on average of 12 EU countries)
3. Infant mortality rate/1,000 [1.1] | 20.5 (2000, MSH) | 16.3 (2002, INSTAT and MoH) | 13 | 10 | 7 | 4.5 | 4.5 (2002)
4. Water supply within dwelling (%) [1.2] | 46.9 (2002 NHDR) | 31.7 (2001, INSTAT) | 40 | 50 | 60 | 70 | 97.14% (1984)
5. Water running in average day (% of the 24 hours) [1.2] | 38 | 15 (2001, district level) | 40 | 75 | 100 | — | 98% (2000)

Source: Albania National Report on Progress Towards Achieving the Millennium Development Goals, August 2004.
Here we discuss the various issues involved, particularly:
• Guidelines that municipalities can use to help them set target values (including what factors should be considered in setting realistic and challenging targets);
• Who should participate; and
• Concerns about setting targets.

Target-setting is more an art than a science, since it inevitably requires making assumptions about the future. However, below we provide guidelines to help governments and their agencies establish appropriate targets. To be useful to your municipality, challenging targets need to be set that consider your municipality's own priorities, situation, and demographic conditions.

SET TARGETS

Below are suggested guidelines for setting targets for your individual performance indicators.

Consider the amount of funding and number of employees expected to be available during the target period (usually a year). The resources available to your municipality will be a major constraint on making substantial improvement on your key outcome indicators. This funding should include not only your own expected local revenues but also those revenues expected from outside sources, such as the central government, provincial governments, and donor organizations. Because major uncertainties may exist as to the revenues from some, or all, of these, consider setting "conditional" ("variable") targets, with the projected value for the indicators dependent on the amount of resources actually received. Such conditional targets are discussed further below.

Consider your own municipality's previous performance. This baseline will almost always be a major factor in determining targets. Recent-year performance has often been the primary basis on which government agencies have set their next period's targets. Use not only the most recent data but also data on the indicators for prior periods, to identify trends that might be expected to continue into the future. The quality of such targets depends considerably on the quality of the data the municipality has been collecting and the timeliness with which the data become available.

Consider the performance levels reported by your central government and by other jurisdictions with similar activities and workload or customer compositions, when such information is available. Such information is becoming increasingly available, especially at the countrywide level. This is so for Millennium Development Goal (MDG) performance indicators. For instance, see Exhibit 6-2 for data related to access to water as provided in Armenia's Poverty Reduction Strategy Progress Report.

Exhibit 6-2
National Level Data on Relevant Indicators

Indicator | 2002 | 2003 | 2004
Share of people having sustainable access to safe drinking water, percentage | 94.8 | 94.1 | 95.4
Share of households using springs (and/or wells, rivers), percentage | 3.6 | 2.9 | 3.8
Share of households using water delivered by water tankers | 5.2 | 5.9 | 4.6

Localities in Armenia may want to use such national information to set their own targets. In addition, for service areas in fields such as health and education, various international organizations are likely to be collecting and reporting data on performance indicators similar to those in which your municipality is interested. It will probably be most useful to focus on countries that have characteristics similar to your own. Appendix E provides an extract from the World Health Organization's report "World Health Statistics" and an extract from a 2006 report on the MDGs. Each extract reports data from a number of countries on a number of outcome indicators. Exhibit 6-1 illustrates the data that might be available from other jurisdictions, in this case showing a country's latest available national average and that for one of its regions (as well as the region's future-year targets). In the future, it seems likely that information on some performance indicators from other municipalities will become more available. (Clearly, the targets set at the various levels of government are related; provision is needed to achieve at least a basic compatibility among targets for the various levels.)

Using information from other governments has problems. In particular, the definitions and data collection procedures used may be at least somewhat different from the ones you use. Second, the available data may be a few years old.

Consider any targets set by your central government on performance indicators that your municipality is using. The country's national targets should ideally be fully compatible with municipal and any other sub-national government targets. Coordination in target-setting between levels of government is likely to be needed, if not essential, especially in such key areas as health and education. For any targets set at the central level for local performance, work collaboratively with the central government to ensure that targets are realistic and useful. See Exhibit 6-3 for an example of this kind of useful collaboration between central and local governments.

Exhibit 6-3
Targets for Maintenance and Operation of Education Facilities

In 2004 the maintenance and operation of education facilities was formally delegated to local governments in Albania. In setting standards for the maintenance of facilities, a working group including the Ministry of Education and Science and local government representatives established eight standards for critical "National Interest Areas" concerning primarily the health and safety of the school facilities. Both parties worked on the development of those standards and on the design, application, and assessment of the standards. In an ongoing pilot program, compliance with standards is being tested in three localities, and a decision has been made that "Targets are established by considering at least: existing baseline conditions of schools, and available financial resources at both national and local level."
  • Source: "Summary of Conclusions. Developing Draft Standards for Maintenance and Operation of Pre-University Education Facilities in Albania." 2005.Identify any new developments—internal and external—that may affect the programs ability toachieve desiredoutcomes. New developments include such factors as: • Demographic changes in the population of your municipality; • Legislative changes (such as from the central government) affecting policy or funding that have recently occurred or are expected to occur soon; • Known expected major shifts, in or out, of businesses; and • New technological or medical advances, such as in malaria, tuberculosis, or HIV/AIDS prevention or treatment (which may enable the jurisdiction to improve its target values).Consider the outcomes achieved in the past for different customer or workload categories - and theprojected future mix. Since aggregate citywide targets are based, at least implicitly, on some assumption about the distribution of workload by "difficulty" category (see Steps 2 and 4), the program should explicitly estimate the percentage of customers expected to fall into each difficulty category in the next reporting period. Preferably, targets should initially be established for each outcome indicator for each different category of customer or workload. Then based on your projections of how many customers or workload are expected in each category an overall, aggregate target can be set. This should lead to more realistic and appropriate targets. (Having different targets for different categories will help reduce the temptation for program personnel to concentrate on easier-to-help customers or workload in order to show high performance.) For example, the values achievable for an indicator of success of training programs in helping disadvantaged citizens obtain jobs will likely be affected to a substantial extent by the literacy of the clients of the training program. If (a) the municipality is able to estimate the number of clients expected to fall into each of, say, three literacy levels, and (b) the performance measurement system has provided data for each of those three levels on the percent who got jobs the past year after completing the program, then it can use that information to help calculate a more accurate aggregate target.Consider benchmarking against the best. If the program has more than one unit that provides the same service for the same types of customers, consider using the performance level achieved by the most successful managerial unit as the target for all units. Alternatives are to set the target at, or close to, the average value achieved in the past for all the units. For example, you might choose as your 5-year target for your municipalitys infant morality rate the best value achieved by any country as reported in the latest World Health Organization report. The exhibit in Appendix E shows that the best known available value is 2 deaths per 1,000 live births (in Singapore). However, a municipality would more likely choose a considerably more reachable target value based only on countries with economic and environmental conditions similar to it. Your municipality should, of course, consider the latest value available on your city and develop annual targets that appear reachable to meet a five-year out target. 81
Make sure the targets chosen are feasible, given the program's budget and staffing plan for the year. For example, retaining a past target for the year despite a reduced budget can probably be achieved up to a point, but eventually substantial cutbacks in resources should be reflected in reduced targets for the agency. Thus, you should consider reviewing the targets after the final budget has been established, or any time during the year when a major change in a program's situation has occurred.

Set targets not only for the year as a whole but also for portions of the year, such as for each quarter, or even for each month for some performance indicators. This is a good management practice. Having such targets provides a basis for regular reviews throughout the year of the progress agency programs are making in meeting their targets, and can encourage mid-year corrections where significant shortfalls occur. Such targets should reflect seasonal factors that might affect performance, and thus the targets, for particular time periods. For example, more job opportunities are likely to occur during the tourist season. Thus, the "number of job placements" and the "percent of persons served by the municipality's job-training services who become employed" can be expected to increase during such times, and the targets for these two performance indicators should be higher for tourist-season months. Similarly, other seasonal differences in outcomes can be expected during months with normal weather differences and service demands (such as the number of patients with seasonally related health conditions).

If an outcome indicator is new, defer setting firm targets until the program has collected enough data to be confident of setting plausible values. However, a program might set rough targets for the initial data collection period, being explicit in labeling them as "pilot" targets.

If appropriate, use a range for the target. While a single value is preferable, targets do not have to be a single value. A range is a reasonable alternative, especially if a great amount of uncertainty exists. For example, the target might be expressed as the most likely achievable value plus or minus 10 percentage points. Another option that might be appropriate for some indicators occurs where the performance indicator value is highly dependent on some external factor over which the municipality has no control. This may frequently be the case if the amount and timing of major funding from the central government is highly uncertain. You might identify two or three scenarios as to the amount and timing of funding that would be forthcoming. For each scenario, you would identify the performance targets you believe could be achieved with those funds. (A sketch of such conditional targets follows.)
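The "conditional targets" idea just described can be recorded as a small lookup: publish a target per funding scenario, then apply the one matching the funds actually received. This is a minimal sketch; the scenario thresholds, funding amounts, and target values are hypothetical.

```python
# A minimal sketch of conditional ("variable") targets keyed to the
# amount of outside funding actually received.
scenario_targets = [
    # (minimum funding received, target % of streets rated "clean")
    (0,       40),  # low scenario: little or no outside funding arrives
    (100_000, 50),  # medium scenario
    (250_000, 60),  # high scenario: full requested transfer arrives
]

def applicable_target(funding_received: int) -> int:
    """Return the street-cleanliness target for the funds actually received."""
    target = scenario_targets[0][1]
    for minimum, scenario_target in scenario_targets:
        if funding_received >= minimum:
            target = scenario_target
    return target

print(applicable_target(120_000))  # -> 50 (medium scenario applies)
```

Publishing the whole scenario table in advance, rather than revising a single target after the fact, protects the program from the appearance of moving the goalposts.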
Who Should Set the Targets?

Typically, the municipality's program managers make the initial selection of target values for each performance indicator (preferably with the help of each manager's staff). The program manager is normally the person with the most knowledge of what factors need consideration when setting the targets. However, a program's targets should be reviewed by higher-level managers to assure that the targets are appropriate and compatible with higher-level concerns.

An option sometimes used is to have higher-level officials and/or the public set the targets. These are persons without much detailed knowledge of how the program or service works. The advantage of this option is that it incorporates the views of the "customers" for the service and is more likely to establish targets that push the program to higher levels of achievement (even if program personnel participate in the group sessions). The disadvantage is that the targets these outsiders set might not be realistic and could set the program up for failure. Ideally, a wide consensus would be achieved in the municipality, involving citizen and business groups as well as agency managers and elected officials. At times, high-level political considerations will override the judgments of the program managers. This may lead in some cases to very optimistic targets (to improve the chances of success in a near-future election, before the actual values become available) or, in other cases, to very conservative targets (to improve the chances of success in the next election, after the actual values have been calculated).

An example of outside participation in setting targets is given in the "Albania National Report on Progress Toward Achieving the Millennium Development Goals." It reported that the central government formed seven working groups that included government institutions, NGOs, and the private sector to "reach a national MDG consensus" on "Albania-relevant goals, targets and indicators" [Albania, 2004, page 3]. This same approach can be considered for adaptation to the municipal level.

CONNECT LOCAL TARGETS TO NATIONAL TARGETS

When your agencies select targets for their performance indicators, they should consider any national targets the central government may have set for performance indicators that are similar to the agencies' indicators. The national targets can be considered another set of benchmarks you can use to set your own targets.

Preferably, the central government will have previously obtained your municipality's input as part of its selection of national targets. The central government should have considered targets set by its local governments as part of its target-setting process. Ideally, the central government and its municipalities would work together in setting targets for those performance indicators that are common to both levels of government. For example, infant mortality rates are a shared responsibility of all levels of government. The central government should work with local agencies to work out strategies, resource needs, performance measurement and reporting procedures, and targets.

RELATION OF MUNICIPALITY TARGETS TO MILLENNIUM DEVELOPMENT GOALS (MDGS) AND STRATEGIC PLANS

For each MDG goal, the UN has established one or more outcome indicators, for each of which a target may have been set (see the Appendix B table). However, MDG targets are not usually provided for the years leading to 2015. Far-out-year targets are important for strategic planning but are much less useful for operations. Targets are also needed for earlier years, and for particular years in the near future.

If you formulate a strategic plan for your city that integrates MDG goals, you should establish annual MDG targets. Strategic plans should also break down the far-out strategic targets into annual targets for the purpose of annual performance budgeting and monitoring. The annual targets set during your strategic planning process should be used in the annual budgeting process of your city so that you can link results with resources and demonstrate effectiveness, or identify a lack thereof. (Performance budgeting is discussed in more detail in Step 7.)
If your municipality is using any form of program, performance, or results-based budgeting, your annual budgets should already include, as part of your budget submission, projected targets for your performance indicators in the budget year.
For those of your performance indicators that are similar to MDG indicators, it is good practice when setting targets to consider the most recently available historical MDG data from individual countries as well as the international MDG targets. Unfortunately, MDG values are not available at the present time for individual cities or other local governments. In addition, some of the available country-level data are somewhat old. Nevertheless, you can use some of the MDG data to help you develop your own targets; that data provides one set of benchmarks for you. We have provided some samples of such data for your own target setting in Appendix B.

It is, of course, very important that you set your targets considering your own local situation. The MDG goal-setters also recognize that each country, and presumably each municipality, needs to establish its own targets reflecting its own unique situation. The basic philosophy is that targets should push the municipality forward but should be reasonably attainable.

Finally, perhaps the most important limitation of the MDG indicators and their targets is that they cover only a portion, and probably only a small portion, of the issues and concerns that municipalities have. As indicated in the sample list of outcome indicators in Appendix A, only a small percentage are MDG indicators. However, the broad MDG goal statements can be interpreted to include the need for many more indicators, such as those included in Appendix A.

CONCERNS ABOUT TARGETS AND TARGET SETTING

Public officials understandably are often concerned that failure to meet targets can become threatening to them, such as by threatening their job security. This is especially of concern to public officials in situations where an agency's failure to meet its key targets is due to factors outside the agency's control. This can lead to "game-playing" with targets, such as setting targets that are overly easy to achieve in order to be more likely to meet them. Thus, public managers and their employees are concerned that they may become political scapegoats if important targets are not met. One way to lessen this concern is to give agencies and their programs the opportunity to include in their performance reports their explanations for missed targets and to identify what they are doing, or planning to do, to remedy the problem.

A second concern arises when circumstances change substantially during the year after targets have been established, making it very difficult if not impossible to meet those targets. This problem can be lessened by permitting agencies to modify their targets during the year. This should be permitted if circumstances outside the responsibility of the program change so substantially that the program is not likely to be able to come close to meeting the target.

FINAL COMMENT

Setting municipal targets for each performance indicator can be of considerable use to public managers, elected officials, and the public. Target-setting can be a highly useful management tool and can encourage program improvements, especially if sub-targets are set for each reporting period during the year, such as quarterly or monthly, depending on the particular indicator.
For example, if a department surveys its customers only once a year, or if the value of a performance indicator is not likely to change significantly on a monthly or quarterly basis, quarterly or monthly reporting on that particular performance indicator is not appropriate, and only annual targets would be needed. However, other indicators, such as response times for providing a service, are likely to be subject to short-term changes, so the data should be useful if collected more frequently, such as monthly or quarterly. Then quarterly or monthly targets should be useful to department management for tracking progress and identifying the need for interim changes. (A sketch of spreading an annual target across quarters appears at the end of this step.)

Annual and long-range targets provide a roadmap for the jurisdiction and can be a powerful motivational tool for the government and its managers for improving service outcomes.

Long-range target-setting, such as that called for in the MDGs, can be very helpful if the municipality has some form of multi-year strategic plan and the plan includes annual targets so that progress toward the plan can be tracked. This enables the municipality to identify the need for mid-course corrections where necessary.

In addition, establishing targets for out-years, perhaps for five years into the future, can encourage long-range thinking by program personnel and reduce the temptation to over-emphasize current results at the expense of future progress.
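One way to produce the quarterly sub-targets discussed in this step is to spread an annual target across quarters using seasonal weights. This is a minimal sketch; the annual target, the weights, and the assumption that Q3 is the tourist season are all hypothetical.

```python
# A minimal sketch of seasonal sub-targets: an annual job-placement
# target spread across quarters with higher weight in the tourist season.
annual_placement_target = 400  # job placements for the year (hypothetical)

# Relative weights per quarter; Q3 is assumed to be the tourist season.
seasonal_weights = {"Q1": 0.8, "Q2": 1.0, "Q3": 1.5, "Q4": 0.7}

total_weight = sum(seasonal_weights.values())
quarterly_targets = {
    quarter: round(annual_placement_target * w / total_weight)
    for quarter, w in seasonal_weights.items()
}
print(quarterly_targets)  # {'Q1': 80, 'Q2': 100, 'Q3': 150, 'Q4': 70}
```

Seasonally adjusted sub-targets keep quarterly reviews meaningful: a weak tourist-season quarter is flagged against a high expectation, not excused by the annual average.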
Step 7. USE PERFORMANCE INFORMATION TO IMPROVE SERVICES

THE IMPORTANCE OF USING THE INFORMATION

Performance measurement is of little value if nothing is done with the information it produces. Surprisingly, there are many examples around the world of performance measurement systems that produce data that are seldom, if ever, used for improving services. The use of the data should be central to the performance management system from the start, to make sure the effort is not wasted.

The primary use of performance information around the world has been as a way to provide accountability of governments to their citizens. This is very important. However, probably the most important use of performance information is the role it can play in improving local services. It is clear that the data can be very informative to service managers, but how can the data actually be put to use in an active and systematic way? In this section, we suggest a number of important ways to use performance information.

UNDERTAKE SERVICE IMPROVEMENT ACTION PLANS

Service improvement action planning is a straightforward process that incorporates performance management tools into a framework for improving service outcomes. This process is essentially a focused version of the steps described in this manual, applied to a particular service, program, or issue, and often can produce measurable improvements in a fairly short time. Service Improvement Action Plans (SIAPs) have already been put into action in several cities in Eastern Europe and have helped identify priorities and set up systems leading to improvement. The use of a SIAP introduces the discipline managers need to think through a problem and its solutions. SIAPs provide other advantages as well, such as providing the foundation for program budgeting, long-range financial planning, and strategic planning.

  Desired Outcomes Can Cut Across Boundaries
  A SIAP focuses on a limited number of outcomes. But these do not need to be confined to one service; they can cut across several departments, for example, to address such broad issues as youth, the environment, health, or economic development.
  Improving Citizen Satisfaction with Cleanliness
  Several cities in Georgia undertook SIAPs focused on cleanliness. All resulted in measurable improvements in cleanliness and in citizen satisfaction. In the city of Ozurgeti, for example, the number of blocks receiving high ratings on cleanliness rose from 12% to 47% in the first year, while citizen satisfaction with the cleaning service rose from 54% to 82% over the same period. This increase in satisfaction with services was also accompanied by increases in fee collections - in one city the increase in collections was as high as 50% in the first year.

The tasks required for completing a SIAP are described below. It is not necessary to carry out each step of the process when first adopting it. However, as city staff become more familiar with the SIAP, they are likely to find it a tool useful in many aspects of their job and expand its application. The tasks are laid out in the sequence in which they usually occur, but in a number of cases it will be useful to go back to an earlier step to update information or rethink the direction the SIAP will be taking.

Task 1. Identify the Focus Area for the SIAP.
  Many cities have focused on a traditional service area or department - for example, solid waste collection or public lighting. However, others have found that the SIAP process is well suited to address a more complex problem or issue that does not necessarily reside within one city department. Service areas that cities in Russia, Albania, and Kyrgyzstan have chosen include: juvenile delinquency, avian flu, tourism, economic development, tax and fee collection, and traditional holidays (community pride). The selection of service areas may be based on feedback from surveys identifying citizen priorities, city council input, as well as deliberations among city technical staff and leadership.

Task 2. Form a Working Group.
  Try to form an inclusive group that encompasses key stakeholders. Start by identifying the individuals who are interested in or involved in the area of focus. There is no need to limit the working group to one department or even to the local government. For example, a working group addressing city cleanliness might include a health-focused NGO as well as the public works and health departments and the contractor that collects solid waste. Think about including city council members, other departments, NGOs, other government agencies, experts, or other individuals. There are two reasons for the scope of these groups: (1) to be sure the group is aware of the many - often complicated - dimensions of the issue being addressed and to get the perspective and feedback of different stakeholders, and (2) to be sure the Working Group will have the avenues, resources, and skills to implement the actions the group will be recommending.

  Albanian City Uses Performance Management to Improve Street Cleaning
  In 2004, the Municipality of Kavaja, Albania, identified street cleaning as an issue deserving attention. A citizen survey conducted with funding from USAID revealed that only 15% of citizens surveyed viewed the city as clean or very clean. The city opted to create a Service Improvement Action Plan in an attempt to improve upon these results and truly achieve its desired outcome of a "clean city."
  Kavaja established a local government-citizen working group whose task was to study the problem, including analyzing performance information, setting targets for improvement, devising actions for improvement, and monitoring annually.
  Through a variety of actions, including purchasing new garbage bins, approving a new fee schedule, expanding garbage service, and reallocating existing resources, in one year's time Kavaja was able to improve its citizen satisfaction ratings with cleanliness by 46% and, within two years, by over 100%.
Task 3. Prepare a Situation/Issue Analysis.
  Staff and other core group members should record what they think are the key issues or concerns. What appear to be the nature and causes of the problems identified? The latest available performance data should be examined to help identify the scope of the main issues the group thinks are important. This is not an in-depth analysis but an outline of the key directions and issues that should be addressed by the SIAP. Note that this is a preliminary step. Later, as more data become available, it is likely that the issues identified will change and the "situation" will look different.

Task 4. Prepare a Table Presenting the Expected Level of Service, Outcomes (Results), and Indicators.
  This table - see the example in Exhibit 7-1 - is the core of the SIAP. The Working Group should identify several desired outcomes, select the indicators to measure progress toward those outcomes, and fill in the latest set of available data for the outcomes, outputs, and inputs relating to the service. The past outcome data should also be broken out by demographic characteristics (such as district, age group, gender, etc. - see Step 2 for further discussion of breakouts). This will enable the working group to much better pinpoint key problem areas. All these data become the baseline values for the indicators. In some cases where data are not available, it may be necessary to obtain new data before the next steps, using one or more of the data collection procedures discussed in Step 3.

  Exhibit 7-1 provides an example of a table with outcomes and outcome indicators associated with the aim of meeting one of the MDG targets. In this case, the city has identified four outcomes that the city can affect in order to contribute to ensured environmental sustainability, and it has set targets for next year that it believes to be realistic.

Exhibit 7-1
Outcomes and Outcome Indicators for a SIAP Linked to a Millennium Development Goal

Goal 7: Ensure environmental sustainability
Target 10: Decrease the proportion of people without sustainable access to safe drinking water and basic sanitation.
Target 11: Improvement in the lives of at least 100 million slum dwellers.

Desired Outcome | Outcome Indicator | Baseline value this year | Target next year | Source
Access to safe drinking water | % of citizens who have access to water four hours or less per day | 28% | 15% | Survey
Access to safe drinking water | % reporting that sometime in the past 12 months the water had a bad taste | 23% | 10% | Survey
Access to safe drinking water | % non-revenue water, i.e., (volume of water supplied from all sources (m3) - volume of water billed) / volume of water from all sources | 48% | 30% | Water agency records
Access to basic sanitation | % of citizens with access to improved sanitation | 87% | 50% | Survey
Clean streets | % of citizens who report having seen animals/small pests in the uncollected garbage | 47% | 25% | Survey
Clean streets | % of streets rated as "clean" or "very clean" (score 3 or 4) | 24% | 60% | Trained observer ratings
Clean streets | % cost recovery for solid waste collection | 63% | 75% | Municipal records
Access to adequate housing | % of low-income families living in adequate housing | 45% | 60% | Survey

Task 5. Set Targets.
  For each indicator the working group should come up with targets. These should be for the next year, but the group may want to consider out-year targets as well. These targets may later change based on the particular course of action decided on. Moreover, the final selection of targets should probably involve input from other decision-makers, such as the mayor or the city council.

Task 6. Identify What Options Are Available to Correct the Problem.
  Actions may encompass one or more of the following types; usually more than one way to make improvements will be available.
  • Shifts in resource allocation within the service (e.g., bins are relocated from one zone to another)
  • The introduction of new technologies (for example, the installation of water meters or the acquisition of new solid waste collection equipment)
  • Public awareness campaigns to help reduce the size of the problem (for example, an anti-litter campaign)
  • Appeals to regional or federal government for regulatory changes
  • Additional staff or training
  • Steps to research best practices
  • Other analysis
  • Improved monitoring

Task 7. Estimate the Benefits and Costs of Each Option.
  Estimate the likely effects of each option on each of the outcome indicators. Also estimate the costs of each option. Finally, consider the feasibility of implementing the options: are there technical, political, or other constraints that would make some options very difficult to implement? Then select the option that appears to have the best outcomes for the expenditures required. (A sketch of such a comparison follows the box below.)

  Taborsky, Oblast of Perm, Russia, Considers Options to Improve Street Lighting
  The community identified street lighting as a critical area, with the greatly deteriorated system contributing to a number of other problems, such as difficulty traveling, doctors reluctant to make house calls after dark, and the encouragement of petty crime. Based on the priority of maximizing coverage - with a target of illuminating 85% of the streets - they shifted resources from planned light meters and light-sensitive photocell devices to installing lighting throughout the village. A volunteer team of high schoolers was formed to promote the maintenance of the new street lights and prevent youth vandalism.
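The Task 7 comparison can be made concrete by relating each option's estimated outcome gain to its estimated cost. This is a minimal sketch; the options, effect estimates, and costs are hypothetical, and cost-effectiveness is only one input to the choice.

```python
# A minimal sketch of comparing improvement options by estimated
# outcome gain per unit of cost (here, per 1,000 currency units).
options = [
    # (option, est. gain in % of streets rated "clean", est. annual cost)
    ("Relocate garbage bins to priority zones", 8, 5_000),
    ("Purchase additional collection vehicle", 20, 40_000),
    ("Anti-litter public awareness campaign", 5, 3_000),
]

for name, gain_points, cost in options:
    points_per_1000 = gain_points / (cost / 1000)
    print(f"{name}: ~{points_per_1000:.1f} points gained per 1,000 spent")

# Cost-effectiveness alone should not decide: feasibility, the total
# budget, and how close each option brings the city to its target
# (Task 5) also matter, and options can be combined.
```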
  Identify in detail the expected cost of the selected option. These costs will usually include such items as wages, fuel for operating a machine or vehicle, and materials used for repairs and maintenance, plus any additional equipment or facilities that might be needed. If the planned actions will have budget implications, identify them. (Note: not all service improvements will require budget increases!) Based on what appears achievable if the selected option is implemented, select final targets for each performance indicator. Depending on the problem being addressed, targets for more than one year into the future may be appropriate.

Task 8. Develop an Action Plan for the Selected Option.
  Identify responsibilities for implementing the selected option and the dates by which each task will be completed. The action plan is a critical tool for measuring progress in implementing the Service Improvement Action Plan; without assigned responsibility and deadlines, action plans rarely get implemented. The information generated on the cost of inputs such as labor, materials, and equipment forms the basis for budgeting. Because the SIAP method also usually requires multi-year planning, the expenditure estimates can be factored into a long-term financial plan for the city. Finally, because SIAPs require outcomes to be stated, they form the base for a practical strategic plan, which provides local leaders a road map for the future and the costs to traverse it. Overall, municipalities are likely to find that SIAPs become an integral part of their planning, management, and budgeting activities.

  Poti, Georgia, Increases Collection of Water Tariffs
  In the city of Poti, Georgia, a working group was formed including three people from the water company, one from the Sakrebulo, one from a local NGO, and one from the local television station. The SIAP addressed such issues as improving billing procedures, signing service contracts between the service provider and households, and installing metering in some neighborhoods. One result was an increase in collections from 18% to 29% within the same calendar year.

Task 9. Monitor, Report, and Implement.
  Once the action plan is put into motion, it is important to track progress. This is where the performance indicators come into play. The departments will need to be sure that data collection proceeds in a timely manner so that, at appropriate intervals - at least once a year - information will be available on performance. Results should be reported to city managers and leadership, as well as to the staff involved. Reporting to citizens should take place on a regular basis, along with information on local government plans to address any particular needs or problems. Managers can use the performance information as they receive it to make needed changes. This cycle of action, monitoring, reporting, adjusting, and action will continue over time.

  Action plans should be reviewed on an annual basis, ideally shortly before the preparation of the annual budget, in order to incorporate changes in the cost of service delivery and additional expenditures into the next budget cycle. Further, review at this time allows the department director, the municipal manager, and the Council to inform the citizenry on how service delivery has improved and how resources will be used to continue the upward trajectory in service improvements.

  Each city will find its own best way of carrying out this process. Cities may not be able to carry out all these steps perfectly from the start, but the effort is likely to be well worthwhile. Exhibit 7-2 below is one example, based on the worksheet completed by a real city carrying out a SIAP on solid waste collection. Local governments have found it useful to use this form throughout the process.
Exhibit 7-2
Sample Worksheet for Selected Tasks in Preparing a Service Improvement Plan

Working Group - list members

Situation Analysis - Describe the current service, including delivery mechanisms, potential problem areas, and ideas about possible improvements.

Identify important outcomes and outcome indicators. Identify the data collection source and, where possible, provide baseline data.

Desired Outcome | Outcome Indicator | Baseline value 2007 | Target 2008 | Source
Streets are clean | % of citizens who report having seen animals/small pests in the uncollected garbage | 47% | 25% | Survey
Streets are clean | % of streets rated as "clean" or "very clean" (score 3 or 4) | 24% | 60% | Trained observer ratings
Streets are clean | % of citizens who say they do not know the garbage collection schedule | 87% | 50% | Survey
Service is financially sustainable | % cost recovery for solid waste collection | 54% | 75% | Municipal records

Identify tasks to be completed by the working group
1. Look at breakouts of citizen satisfaction data to identify which neighborhoods have the most complaints about cleanliness. (One way to tabulate such breakouts is sketched after this worksheet.)
2. Verify the placement of garbage bins in the priority neighborhoods to determine whether it might be useful to reallocate them.
3. Plan an information campaign to educate citizens about the garbage collection schedule and encourage them to deposit garbage into bins.
4. Look into the practices of other municipalities on fee/fine collection procedures. Review legislation concerning garbage fees.
5. Review billing and collections procedures and consider areas for streamlining.

Priority Actions, Timing, and Responsibility. Drawing on the tasks outlined in the previous section, list major actions, a deadline, and staff assignments.

WHAT | BY WHEN | WHO
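Worksheet task 1, breaking out citizen satisfaction by neighborhood, can be done with a simple tabulation. This is a minimal sketch in plain Python; the survey records (one neighborhood/satisfaction pair per respondent) are hypothetical.

```python
# A minimal sketch of a neighborhood breakout of survey responses,
# to see where dissatisfaction with cleanliness concentrates.
from collections import defaultdict

responses = [
    ("Center", True), ("Center", True), ("Center", False),
    ("North", False), ("North", False), ("North", True),
    ("South", True), ("South", True), ("South", True),
]

counts = defaultdict(lambda: [0, 0])  # neighborhood -> [satisfied, total]
for neighborhood, satisfied in responses:
    counts[neighborhood][1] += 1
    if satisfied:
        counts[neighborhood][0] += 1

for neighborhood, (satisfied, total) in sorted(counts.items()):
    print(f"{neighborhood}: {satisfied / total:.0%} satisfied (n={total})")
# Center: 67% satisfied (n=3); North: 33%; South: 100%
```

Reporting the number of respondents per neighborhood alongside each percentage (as in Step 5's disclosure list) keeps small-sample breakouts from being over-interpreted.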
ANALYZE OPTIONS / ESTABLISH PRIORITIES

A major task for public service organizations is to decide among options and establish priorities among competing claims for scarce resources. No organization can do everything it would like to do. This applies both to operational resource allocation systems and to longer-term choices, such as strategic planning and developing a multi-year capital investment program. Information from the organization's performance measurement system can usually provide important input to help make these choices.

The following steps can be used for such analysis. Here we illustrate how an agency might address the choice problem, focusing on choices involving the construction, repair, and maintenance of physical infrastructure, such as roads, bridges, water supply, sewer systems, and buildings or other facilities (such as parks and other public recreational areas). Repair and maintenance of school buildings will be used to illustrate the steps leading up to final choices.*

Step 1. Assess the condition of each existing element of infrastructure (intermediate outcomes), using information from the agency's performance measurement system. In the school example, trained observers might rate each sub-system of each school building in the community using a well-defined rating scale, such as "acceptable," "not acceptable," or "hazardous."

Step 2. Estimate the cost to bring each infrastructure element to an acceptable condition. Exhibit 7-3 illustrates the summary table from these two steps. In this example the cost estimates are those needed to bring the condition levels from "hazardous" or "unacceptable" to "acceptable."

Step 3. Compare the total cost to the available resources. Inevitably, the total costs will be considerably greater than the available resources. In the example, which schools and which infrastructure elements should be repaired? Also consider the various sources of funding and which funds can be used for which activities. Capital costs often come from a different fund than operation and maintenance (O&M) expenditures, and the availability of funding is likely to differ between the two sources. In the school example, some of the large-cost repairs are likely to be considered capital expenditures, for which funds might be more available than for other needed repairs that are O&M activities.

Step 4. Consider other important factors. For example, the severity of the unacceptable condition level is likely to differ among the infrastructure elements. A natural choice would be to fix hazardous conditions first (what can be called the "worst first" option). This might require much of the available resources, as is the case with the cost numbers in Exhibit 7-3 (45% of the total need, and probably a much larger percent of the total dollars available).

Step 5. Estimate the number of persons (in the school example, the number of students plus school staff) affected adversely by each unacceptable condition. The agency might then calculate the ratio "number of persons benefited (by bringing the infrastructure element up to an acceptable condition level) per estimated repair dollar." These ratios provide one useful perspective for prioritizing the repairs. (A sketch of this calculation follows.)

Step 6. If funds are very tight (as usually will be the case), consider interim, as well as complete, "fixes." For example, if some needed work has particularly large costs, can a temporary, partial fix be used until more funds become available? If only a few persons are affected by particularly costly repairs, are there other ways those persons (the students and staff in the example) can be served using less costly means?

* This example is adapted from work done by the Urban Institute with several towns in Albania. The local and central governments both participated in this process.
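The Step 5 ratio can be computed directly once costs and affected-person counts are in hand. This is a minimal sketch: the repair costs are taken from the kind of data Exhibit 7-3 supplies, but the affected-person counts are hypothetical.

```python
# A minimal sketch of ranking repairs by persons benefited per repair
# dollar (reported per 1,000 currency units for readability).
repairs = [
    # (repair, estimated cost, students + staff affected)
    ("School A: temperature system", 341_000, 620),
    ("School B: water supply", 49_000, 450),
    ("School C: sanitation", 18_500, 380),
]

ranked = sorted(
    repairs,
    key=lambda r: r[2] / r[1],  # persons benefited per dollar
    reverse=True,
)

for name, cost, persons in ranked:
    print(f"{name}: {persons / cost * 1000:.1f} persons benefited per 1,000 spent")
# School C ranks first here, School A last, despite A affecting the
# most people - illustrating why this ratio is only one perspective
# (Step 4's severity considerations may still put hazards first).
```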
None of the above steps directly addresses "political" considerations, which will often affect resource allocation decisions. The use of data such as that identified above can help reduce such pressures on public officials, including the temptation to spread the available funding around regardless of need. Note also that performance measurement information provides only part, but a vital part, of the information needed for these resource allocation decisions.

HOLD "HOW ARE WE DOING?" SESSIONS

Cities can hold sessions periodically to review performance information internally with staff. This is an excellent way to focus staff attention on the importance and use of performance information. The mayor may want to preside at these sessions, or they could be internal staff meetings within a single department. As a first step, identify particular indicators to review on a regular basis. It is most useful if printed copies listing the relevant indicators with up-to-date data are available at each session. In a number of instances, cities have chosen to project indicator data on a screen, so that everyone is looking at the same numbers during the discussion.

ILLUSTRATIVE QUESTIONS FOR "HOW ARE WE DOING?" SESSIONS
• For which indicators have we met or exceeded our targets?
• Are there some lessons learned from those successes that might be useful elsewhere?
• For which indicators did we fail to meet our targets?
• Why? And how could we improve performance? (Request a written performance improvement plan.)
• Are there other unexpected outcomes that should be discussed?
• In later meetings, request an update on progress in carrying out the improvement plan and meeting new targets.

Ask relevant staff to provide updates on performance and to explain any deviations from targets. This is also a good opportunity to review breakouts of the performance data for particular areas of the city, or the outcomes for particular citizen groups, to see if one area is experiencing problems and might benefit from more assistance, or if one area is doing exceptionally well and might be used as a model for others. These should be sessions intended to improve service, not occasions for attacking poor performance. These sessions can be highly constructive, and having the performance information in view will help the discussion to be focused and creative. Several cities in the United States (perhaps most famously, "CitiStat" in Baltimore, as well as New York City) and elsewhere use this technique on a regular basis. In Baltimore's CitiStat system, department representatives answer the mayor's questions regarding performance at bi-weekly meetings, while indicator data are projected on screens behind them.
Exhibit 7-3
Cost Estimates for Bringing Each School Element up to an "Acceptable" Level*

No. | Description | Hazardous Conditions | Fire Protection | Lighting | Temperature | Water Supply | Bathrooms | Sanitation | Communications | Total
1 | School A | 1,710,000 | 45,000 | 16,450 | 341,000 | - | 18,000 | 96,500 | 27,000 | 2,253,950
2 | School B | 25,000 | 90,000 | 5,300 | 457,400 | 49,000 | 59,300 | 20,500 | 81,000 | 787,500
3 | School C | 45,000 | 45,000 | 2,600 | 91,000 | 28,000 | 39,000 | 18,500 | 27,000 | 296,100
4 | School D | 1,000,000 | 45,000 | 1,850 | 80,000 | 20,000 | 15,000 | 11,900 | 27,000 | 1,200,750
5 | School E | - | 45,000 | 800 | 37,500 | 2,250 | 11,000 | 8,200 | 27,000 | 131,750
6 | School F | - | 45,000 | 7,000 | 150,000 | 25,000 | 15,000 | 10,100 | 27,000 | 279,100
7 | School G | - | 45,000 | 1,160 | 8,000 | 25,000 | 15,000 | 8,000 | 27,000 | 129,160
8 | School H | - | 45,000 | 1,950 | 134,200 | 30,000 | 22,400 | 20,700 | 27,000 | 281,250
9 | School I | - | 45,000 | 9,500 | 199,000 | 27,160 | 15,000 | 6,500 | 27,000 | 329,160
10 | School J | - | 45,000 | 4,000 | 123,600 | 35,000 | 63,220 | 11,800 | 27,000 | 309,620
11 | School K | - | 45,000 | 1,900 | 79,500 | 17,000 | 11,000 | 8,800 | 27,000 | 190,200
TOTAL | | 2,780,000 | 540,000 | 52,510 | 1,701,200 | 258,410 | 283,920 | 221,500 | 351,000 | 6,188,540

* Adapted from work done by one Albanian community.
Indjija and Paracin (Serbia) CitiStats

Both Serbian cities have introduced a CitiStat process: Indjija in 2003 (called Sistem48) and Paracin in 2004 (called InfoStat). In both cities, all municipal service providers (including both city departments and municipal enterprises) attend regular meetings with the Mayor, at which key staff report on a variety of pre-selected indicators. At each session, indicator data are reviewed for each service, covering both outcome indicators (such as the time taken to complete responses to citizen complaints and total parking fees collected) and output indicators (such as the number of work orders completed, the number of trees trimmed, and the number of square meters of road swept). The meetings in Indjija are open to the media.

Both cities attribute a number of improvements in their operations to these meetings. In Indjija, these include clean-up of illegal dumping, fixing of sewage problems caused by precipitation, and replacement of traffic lights to reduce breakdowns. Paracin attributes to the meetings such improvements as increased attendance at cultural events due to improved communication with citizens (identified by a citizen survey), and major cost savings in heating expenses in kindergartens (based on comparisons of the costs in individual schools).

Source: "DAI/SLGRP CitiStat Implementation," Development Alternatives, Inc., working paper, undated but probably 2006.

PERFORMANCE BUDGETING

"How can you get there if you don't know where you are going?"

Using performance information for budgeting is known as "performance-based budgeting" or "results-based budgeting" and is probably the best-known potential use of performance information. While performance measurement "looks backward" to see what has been accomplished, performance budgeting can be thought of as looking forward. This presents additional challenges.

Performance-based budgeting has several strong advantages. First, it improves decision-making. City administrators can use performance information to think more carefully about the choices implicit in the budget they prepare. And the city council receives better information about the implications - the outcomes - that its decisions will have, and is therefore better able to make effective choices.

Second, it can lead to better resource allocation. Developing a results-based focus towards budgeting enables you to monitor and better understand the accomplishments of each program and what you are getting for the resources allocated to it. While some local governments may have limited authority to reallocate funds across different departments, there is always some scope to allocate limited resources better within a department. A key element in performance-based budgeting is linking program objectives and accomplishments to the budget request justification for the next year.
Third, it provides accountability in an especially important area: spending taxpayers' money in a way that explicitly links funding to the outcomes it is expected to produce. Including outcomes alongside expenditures greatly enhances the transparency of the budget. Citizens - and councilors - have found budgets much easier to understand when they include performance information.

Governments at all levels in many countries have begun to use performance-based budgeting. In practice, this has taken many forms, across a fairly large range of possibilities. Each city should consider doing some or all of the following:

• List objectives in the budget for each service area. This will require departments to think carefully about the purpose of their work, and to present the proposed budget figures in connection with those purposes for everyone to consider.
• List outputs, outcomes, and indicators based on service objectives in the budget. It will be easier to include outputs at first, as these are much more directly linked to planned expenditures. However, identifying the relevant outcomes will make the budget figures considerably more meaningful. Including data from recent years will describe the current context and will give council members information on which to judge how the department is likely to perform in the future.
• Include targets for key outcome indicators, linking outcome targets to estimated expenditures. These will describe most clearly how the funding being requested is expected to be used.
• Use outcome indicators and targets to prepare the budget. The information should play an important role in an organization's budget choices. For example, different outcome targets can be considered relative to the funding levels requested.
• Use explanations about past performance levels to help justify proposed budget changes.
• Have the city council use performance information to make appropriation decisions. Although this is likely to be a significant change for council members used to seeing a budget in traditional format, the council will find the inclusion of indicator data - especially outcomes - invaluable. On the one hand, it will make decisions clearer, once the implications in terms of results are included in the budget. On the other, it will be easier for members to explain to citizens why they made the decisions they did.

Exhibit 7-4 provides a list of questions that department heads, the mayor, or council members can ask when developing or considering budget requests, in order to weigh the performance implications of the proposed budget.

Exhibit 7-4. Basic Questions to Ask During the Budget Process

1. What are the key results that should be expected from your department or service?
2. Who is your service intended to serve? Who else is affected by your program?
3. What important performance indicators do you use to track progress in attaining these results? If none at the moment, what would make sense to use?
4. What do these performance indicators show for the past several years?
5. What values for these performance indicators do you expect to achieve with the budget you propose?
6. To what extent have you met your most recent targets? For targets that were not achieved, why were they missed? What does this latest budget do to correct the problems?
7. What actions does your proposed budget include that will improve the quality of your services for our citizens?
8. Where you have proposed efficiency (cost-saving) improvements, what effects will they have on the quality and effectiveness of the service?
9. What major factors influence the results you are trying to achieve? What are you doing to address those factors?
10. What are the major challenges facing your program/service?
11. How would results change if funding were increased by 10 percent? Decreased by 10 percent?

Performance-based budgeting presents some problems, in particular in estimating the costs of achieving various levels of outcomes. A few tips might help the process:

1. Start with historical costs from the last few years.
2. Think about changes expected in:
   a. resources available
   b. the complexity of the task
   c. staffing available
   d. external factors, such as the weather, new regulations, or population
   e. policy changes
3. Do not expect absolute precision. It may be useful to plan to adjust targets during the year, depending on performance in meeting semi-annual or quarterly targets.
4. Consider setting variable targets that explicitly depend on different factors; this is a way of formalizing the uncertainty (a short sketch of how such a target can be encoded follows this list). Some options are:
   a. Setting different targets depending on expected future workload
      Indicator: Number of hours it takes to complete a service call
      Target: 24 hours, if no more than 50 work orders per week come in during the budget year
              48 hours, if over 50 work orders per week come in during the budget year
   b. Setting a range, rather than a single value, for the target, such as:
      Indicator: Percent of roads to be asphalted
      Target: 40-50%, depending on future weather conditions
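Variable targets of this kind are straightforward to encode so that the applicable target is determined automatically from the realized workload. Below is a minimal sketch using the service-call example above; the numbers come from that illustration and are not drawn from any actual budget.

```python
# Minimal sketch of a variable (workload-dependent) target, following the
# service-call example above. All numbers are illustrative.

def response_time_target(work_orders_per_week: int) -> int:
    """Return the target completion time (hours) for a service call,
    depending on the workload actually experienced during the year."""
    return 24 if work_orders_per_week <= 50 else 48

actual_hours = 30   # average hours actually taken to complete a call
workload = 62       # work orders per week observed during the budget year

target = response_time_target(workload)
print(f"Workload: {workload}/week -> target {target} hours; "
      f"actual {actual_hours} hours; target met: {actual_hours <= target}")
```

Exhibit 7-5 is an excerpt from a performance-based budget prepared by the city of Fier, in Albania.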
Exhibit 7-5. Example of Performance-Based Budgeting from Fier, Albania
Excerpt from the Parks and Greenery section of the 2007 Budget

Performance strategic objective a.1: Citizens satisfied with the greenery in the city and neighborhoods
  Annual performance goal a.1.1: 5% increase in citizens' satisfaction with the greenery in the city and neighborhoods
  Annual performance goal a.1.2: 5% increase in the number of citizens who use the parks of the city and neighborhoods
  Activities:
  - Increase the number of parks from 86 in 2006 to 90 in 2007
  - Plant trees and flowers

Performance strategic objective a.2: Citizens satisfied with parks
  Annual performance goal a.2.1: 5% increase in "very good" and "good" ratings of the cleanliness, safety, maintenance, and lighting in parks
  Activities:
  - Supply six parks with garbage containers and lighting
  - Supply five parks with benches
  - Maintain the existing parks
  - Add two employees to the greenery service

Outcome Indicators for the Greenery Service

| Indicator | Baseline 2006 | Target 2007 |
|-----------|---------------|-------------|
| % of citizens satisfied with the quality of parks and green areas in the city | 19.1% | 25% |
| % of citizens who use parks and green areas in the city and neighborhoods | 27.3% | 33% |
| % of citizens who rate the cleanliness of parks / green areas very good or good | 52.5% | 57% |
| % of citizens who rate the safety of parks / green areas very good or good | 56.8% | 62% |
| % of citizens who rate very good or good the maintenance of grass and trees in parks / green areas | 43.2% | 45% |
| % of citizens who rate very good or good the benches and tables in parks and green areas | 33.9% | 40% |
| % of citizens who rate very good or good the lighting in parks and green areas | 27.9% | 32% |
| % of citizens who rate the works of art in parks / green areas very good or good | 18% | 22% |

Budget allocated to the Greenery Service and the service enterprise (in thousand leks)

| No. | Item | 2004 | 2005 | 2006 | 2007 |
|-----|------|------|------|------|------|
| 1 | Service enterprise | 43,906 | 52,152 | 61,374 | 70,400 |
| 2 | "Flores & Co." Company | 7,062 | 5,569 | 3,452 | 4,056 |
| 3 | Investments | 0 | 0 | 0 | 10,072 |
| | Total | 50,968 | 57,721 | 64,826 | 84,528 |
CAPITAL BUDGETING

Many capital expenditures, such as those for road, water, and sewerage rehabilitation, are intended to provide improved public services. These capital expenditure decisions should therefore be made based in part on estimates of the extent of the improvement in services that would result.

Using outcome information for selecting, and later justifying, capital projects can help:
• Determine how proposed capital projects relate to the long-term capital plan.
• Identify how proposed capital projects support major goals and objectives in the strategic plan.
• Provide information to the public about how an investment will benefit them in the future.
• Hold agencies more accountable for the results they achieve with the capital projects they propose, by following up on expected results after the projects are completed.

Useful steps include:
• Use the latest performance information to determine to what extent a capital facility is needed. For example, information on response times for emergency vehicles can be used to determine whether more vehicles need to be purchased.
• For capital projects intended to directly provide services, provide annual estimates of the changes in results expected for that service after the project is completed. This will enable the city to assess the value of each project to the municipality.
• Hold public discussion sessions with citizens to obtain their opinions on the projects being proposed, including the results they would expect from those projects.
• When preparing public service capital investment proposals, estimate the expected outcome values for each relevant outcome indicator. Exhibit 7-6 provides an example.

Exhibit 7-6. Example of Outcome Information for a Capital Project

During the budget year, the proposed project will complete much-needed reconstruction and signalization of Center Street from X to Y. This work will speed up traffic and reduce congestion in this area. We expect that peak-hour driving time from one end of the project to the other will improve from its current average of 32 minutes to approximately 19 minutes. In addition, the project is expected to reduce traffic accidents by about one-half, from the 213 accidents recorded over the past 12 months.

• When seeking citizen approval of a capital project, include information on expected outcomes. This information can be very useful in justifying the capital expenditures to citizens.
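When preparing statements like Exhibit 7-6, it helps to derive the headline figures from the underlying baseline and projected values rather than writing them by hand, so the statement stays consistent if the estimates change. A minimal sketch using the exhibit's numbers:

```python
# Minimal sketch: derive the expected-outcome statement for a capital project
# from baseline and projected values (figures taken from Exhibit 7-6).

baseline_minutes, projected_minutes = 32, 19
baseline_accidents = 213
accident_reduction = 0.5  # "about one-half"

time_saved = baseline_minutes - projected_minutes
projected_accidents = round(baseline_accidents * (1 - accident_reduction))

print(f"Peak-hour driving time: {baseline_minutes} -> ~{projected_minutes} minutes "
      f"({time_saved} minutes saved, {time_saved / baseline_minutes:.0%} faster)")
print(f"Traffic accidents: {baseline_accidents} -> ~{projected_accidents} per year")
```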
STRATEGIC PLANNING

If the municipality has a strategic plan in place, it is important to link performance measurement efforts to the plan's strategic goals and objectives - or, conversely, to modify those objectives if they are inconsistent with the outcomes selected as city priorities. Performance monitoring of key strategic objectives - including the establishment of specific targets and monitoring of compliance with those targets - is an essential part of ensuring that the city meets the objectives.

The performance measurement process has the following roles in strategic planning:
• It provides the baseline data for the plan.
• It provides historical information for estimating the likely future outcomes of the various service options examined in the strategic planning process.
• It provides the data for tracking progress against the plan, thus indicating whether mid-course changes are needed.

There are direct links between performance management and strategic management. Strategic planning harnesses your city's strengths and anticipates and plans for its future. It involves specifying a vision for the community, defining a strategy (objectives) to achieve the vision, and finally selecting concrete tools (programs) to achieve the objectives. Performance management evaluates the performance of the various programs to see if objectives and targets are being achieved. It also involves reporting and using performance information to make your city's programs more effective.

Strategic planning is a major topic in itself and is not covered in detail in this Guide.

MOTIVATE YOUR EMPLOYEES

Performance data can be very useful for motivating municipal employees. Typically, the most feasible methods are those that rely primarily on recognition rather than financial incentives. Performance-based motivation can be especially successful if it is focused on teams - not just individuals - reinforcing outcome-orientation and rewarding innovation and success.

Linking salary to performance is an appealing concept but presents many difficulties, such as the difficulty of objectively assessing an individual's contribution to outcomes without making value judgments. It is, of course, also essential to ensure there is no incentive to reduce the quality of performance by focusing only on the outputs that are being measured. In addition, individual monetary rewards can easily build resentment if they are not perceived as being completely fair. More feasible financial incentives are likely to be rewards for a whole team or department for meeting specific performance targets; rewards in the form of additional funding for training or professional development; or more flexibility and less oversight in some of their activities.

Some specific incentives include the following:

Non-monetary Incentives
— Using recognition awards
— Providing access to training or study tours abroad for staff or departments meeting performance targets
— Providing regular performance reports to all program personnel (this can be done by posting results on a bulletin board, for example, and can include breakouts by region or by customer group)
— Setting performance targets and regularly reviewing achievements in relation to targets (especially effective for shorter reporting periods)
— Giving managers more flexibility in exchange for more accountability for performance
— Making performance information an explicit part of the agency's individual performance appraisal process (all persons in a group would receive the same rating on this part of the appraisal)

Monetary Incentives
— Linking pay to performance (note the difficulties described above, as well as the fact that external factors can greatly affect outcomes)
— Allocating discretionary funds to agencies for programs with high performance (such as providing extra resources for classroom equipment for a high-performing teacher, or returning a part of cost savings to the program's budget)

PERFORMANCE CONTRACTING

If you contract out services (such as street cleaning, solid waste collection, street lighting, and road maintenance) to private service providers, consider using performance contracting. If you provide grants to non-government organizations for services (such as a variety of social services), consider specifying performance targets in the grant. In either case, the agreement between the two parties should include outcome targets so that outcomes can be compared against them. The outcome indicators should be included in requests for proposals (RFPs), along with the desired targets for each indicator. Organizations that indicate they can produce higher levels of outcomes can be given higher ratings during proposal evaluation.

A combination of rewards and penalties can be included in these agreements, such as:
— Increased fees for meeting or exceeding targets
— Reduced fees for failing to meet targets

Many service contracts include termination options for poor performance or nonperformance, but these generally apply only to extreme circumstances, usually only vaguely defined, and do not appear to provide much incentive for improving performance. A private organization's performance on previous contracts or grants might also be considered as an explicit criterion for future awards.

In the city of Ozurgeti, Georgia, indicators on street cleanliness were included in the performance-based contract signed with the first private-sector solid waste collection contractor to provide communal services for the city. Trained observer ratings were used to monitor the company's performance.

Outcome-based performance contracting - with clearly defined performance measures - can be attractive to both the government and contractors if the contract also gives the contractor greater flexibility in how the work is performed, as long as the outcome targets are met.
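One way to make such reward and penalty provisions concrete is to express them as a simple fee-adjustment rule agreed in the contract. Below is a minimal sketch; the base fee, target, and bonus/penalty rates are hypothetical and not drawn from any actual contract, including the Ozurgeti example above.

```python
# Minimal sketch of an outcome-based fee adjustment for a service contract.
# The base fee, target, and bonus/penalty rates are hypothetical.

def contract_payment(base_fee: float, target: float, actual: float,
                     bonus_rate: float = 0.05, penalty_rate: float = 0.05) -> float:
    """Pay a bonus for meeting or exceeding the outcome target,
    and apply a penalty for falling short of it."""
    if actual >= target:
        return base_fee * (1 + bonus_rate)
    return base_fee * (1 - penalty_rate)

# Example: a street-cleanliness contract with a target of 80% of blocks
# rated "clean" by trained observers.
print(contract_payment(base_fee=100_000, target=80.0, actual=84.0))  # 105000.0
print(contract_payment(base_fee=100_000, target=80.0, actual=72.0))  # 95000.0
```

In practice the adjustment rule would be negotiated between the parties; the point of writing it down this precisely is that both sides can verify the payment from the measured outcomes.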
Exhibit 7-7 lists a number of key questions that should be considered when establishing outcome-based performance contracting. The following elements are important to a successful contracting process:

• Incentive provisions need to be fair both to the public and to the contractor or grantee when monetary bonuses or penalties are included in contracts or grants.
• Strong contract oversight needs to be maintained to make the contract effective, either by collecting the performance data yourself or by regularly checking the quality of the performance data provided by the contractor or grantee.
• Encouragement and help are likely to be needed for grantees and contractors to maintain their own outcome measurement processes. Contractors and grantees should be required to allow the government to undertake periodic quality control audits of their data systems.
• Post-service measurement will be needed before final payments are made for services whose important outcomes cannot be assessed until a considerable period of time after the contractor's services have been completed.

Exhibit 7-7. Outcome-Based Performance Contracting Questions

• To what extent should contractors be involved in the selection of performance indicators?
• What incentives should be included in the contract?
• How should the size of penalties and positive incentives be related to outcomes?
• Should initial performance contracts include a hold-harmless clause? (This refers to a period of time in which the contracting entity holds off any adverse action against the contractor or grantee, so that they have some time to get on their feet.)
• What role should the contractor play in data collection?
• How should external factors be considered when determining rewards and sanctions (e.g., through escape clauses)?
• How can outcome incentives be used to encourage contractor innovation?
• Should performance on past contracts help determine future contract awards?

Source: Performance Measurement: Getting Results, 2nd edition, 2006. Washington, D.C.: The Urban Institute.

CONTRIBUTE TO NATIONAL AND REGIONAL INFORMATION SOURCES

Performance information collected at the local level can make a major contribution at the regional and national levels of government, which also need to assess the progress of their programs and objectives. These levels of government face many challenges in getting the detailed information on outcomes that can help them select the best policies. In some cases they adopt overarching development plans with their own monitoring systems. Most national governments believe monitoring progress is a valuable tool, but there are often fairly serious data gaps, and countries are still working to improve their capacity to collect data. While country-level data are the simplest to use for national planning, the findings often reveal profound differences across the country or even within a region. For those purposes, national and regional government plans could benefit greatly from local data.

A number of countries have adopted national plans to meet the Millennium Development Goals, in order to concentrate policies and resources on a few selected priorities. Many countries
  • also have "Poverty Reduction Strategies" that are required by multi-lateral donors such as theWorld Bank. These Strategies include monitoring and evaluation components. The indicators they useoften are based on - or include - MDG Indicators. For instance, Albania has built the National Strategy forSocioeconomic Development (their Poverty Reduction Strategy) around the achievement of the MillenniumDevelopment Goals. Many of the MDG indicators appear in the NSSEDs "main objectives", and others areregularly updated in NSSED progress reports. Exhibit 7-8 shows a table from the 2004 Progress Report forAlbanias NSSED, providing actualdata and target values for MDG Indicators.In Albania, as in many other countries tracking progress toward MDGs, it is clear that there areinequities across the country, with some regions doing much better than others, or urban areas outpacing ruralareas. In these cases, local data can be especially useful, and international donors and governments alikerecognize the role that local governments will need to play in meetingthose goals.Performance information collected by local governments can be very helpful to the nationalgovernment in tracking progress toward meeting national targets, and can also suggestrefinements to the indicators that are being monitored at the national level FINAL COMMENT ON USING PERFORMANCE INFORMATIONUltimately, all the above uses of performance information are intended to improve thegovernments services and their benefits to citizens and the community as a whole - and toprovide better accountability of the government to its citizens. 103
Exhibit 7-8. MDG Indicator Data and Targets from Albania's NSSED Progress Report
STEP 8: BUILD MUNICIPAL CAPACITY

The key to building capacity for performance measurement and performance management in most municipalities is likely to be training. In addition, some outside technical assistance is likely to be needed. Suggestions on both are presented below.

• Municipalities should request funding from the national government and donor organizations to build their own capacity, so as to better contribute to the MDGs.
• Donors need to recognize that building the capacity of municipalities will create synergies between the indicators regularly tracked by the municipalities and the MDG indicators.
DECIDE WHAT TRAINING IS REQUIRED, TO WHOM, AND HOW MUCH

SPECIAL NOTE TO LOCAL GOVERNMENTS

This chapter provides a fairly comprehensive list of the kinds of capacity building that would be useful for local governments undertaking performance management. It is not likely that any one local government will need the full range of training described in this section. Moreover, it will be difficult for most local governments to obtain funding for such a complete program. It is practical to think of it in terms of a progression. While most cities will decide their own way of proceeding, we provide in this box some suggestions of how these items might be prioritized.

Most essential:
• Key concepts - outcomes versus outputs, the importance of monitoring performance
• Data collection - choosing sources that are appropriate and feasible
• Using performance information - how this information can be used in the short term to improve results

With additional resources:
• Technical assistance with each of the above steps
• Special topics in data collection
• Data analysis - how to look at data in greater detail and with more accuracy
• Data quality

Adopting performance management requires staff training to provide the special skills and understanding of performance measurement and performance management concepts and procedures. This includes training both in how to undertake performance measurement (the technical side) and in how to use the information obtained ("managerial" training).

Technical training

Managers and professional staff within both national and line agencies (including those in administrative offices such as finance, procurement, and personnel offices) will need some degree of technical training. This includes exposure to such subjects as:

• Awareness of the MDGs and how municipal performance management can contribute to achieving MDG targets.
• The distinctions between indicators of inputs, outputs, efficiency, intermediate outcomes, and end outcomes, including the use of such tools as outcome sequence charts (logic models) and focus groups (of both staff and service clients) to obtain information on how to identify appropriate performance indicators.
• Ways to identify needed outcome indicators for a service, including the use of focus groups and outcome sequence charts to illustrate the relationship between outputs, intermediate outcomes, and end outcomes.
• Ways to measure performance, including data sources and data collection procedures, such as using agency records, surveys (household and user surveys), and trained observer ratings. This includes the basics of surveying service clients, with a brief exposure to sampling concepts to enable agencies to avoid, where appropriate, the need and added cost of surveying large numbers of clients. It also includes the elements, and applications, of "trained observer" approaches, in which personnel are trained to make systematic ratings of physical conditions as a way to measure various aspects of service quality.
• Breaking out outcome data by key customer and service characteristics, obtaining baseline data, and setting targets.
• Analysis of the performance data. This includes such topics as selecting the appropriate comparisons to help interpret how good performance has been, and searching for explanations as to why unusual results (unexpectedly high or low) have occurred.

Training in using the performance data

Often badly overlooked in many, if not most, governments throughout the world is training managers in using performance information to make program improvements, not only to satisfy higher-level requirements on reporting performance. This neglect has meant that managers and officials tend to look on performance measurement primarily as something to get done in order to satisfy higher-level authorities. The important need is to transition managers from performance measurement to performance management. This means training in such elements as the following, each of which can considerably increase the value of the data to managers and their staffs:

• Reviewing performance data that identify and compare the performance achieved for the key citizen demographic groups the service serves, such as age, gender, income, and ethnic groups, and citizens living in the various neighborhoods/sections of the municipality.
• Reviewing performance data that identify and compare the performance achieved for key service characteristics, such as comparisons among individual organizational units providing similar services, and comparisons of the outcomes of different service delivery mechanisms where different amounts and/or types of a service are delivered to citizens.
• Reviewing performance data on service quality, such as information on the timeliness, helpfulness, courteousness, and accessibility of the services.
• Regular and reasonably frequent collection and reporting of the performance data. Typically, higher-level officials examine performance data on an annual basis, usually during the budget/appropriation process. However, annual reporting is not likely to be adequate for managerial purposes. For some services, reporting performance data at least quarterly is likely to be desirable so that managers and their staff can track what is happening. This added frequency will make it more feasible for managers to make mid-course corrections and then determine in later quarters whether the changes that were introduced have led to the desired improvements in performance.
• Holding regular managerial performance reviews with staff after each performance report has been issued. Such reviews can be used to identify where performance is on target and where it is not, to identify possible explanations for unusually good or unusually disappointing service levels, and to suggest ways to improve future performance (such as by identifying practices that have been particularly successful for some organizational units or client groups and that might be transferred to other units or groups).
• Undertaking searches for explanations for performance that is unexpectedly bad or good. A search for explanations should be an explicit part of the performance management process. This topic is discussed in more detail in Step 4: Analyze Performance Data.
• Having data become available and processed in a reasonably timely way, such as within no more than one month after the end of a reporting period.
Data that takes months, or even years, to obtain will lessen the ability of managers to react to it and to make improvements.

Several levels of training are required: for elected officials; for department heads, managers, and supervisors; and for many, if not most, technical and support staff. It can be argued that everyone in the government should have exposure to the concept of citizen-focused services, a central element of performance management.

Performance measurement training appears to work best when it includes small-group exercises in which participants can work on a specific service and go through the process of actually identifying outcomes and indicators, data collection procedures, and uses of the performance information. These exercises help participants apply the theoretical concepts of the training to practical, real-life scenarios, thereby making the training less abstract and more effective.

Who should be trained? All municipal managers and supervisors should eventually be provided training, including administrative managers and not only direct line operating managers. In addition, most municipal staff should also receive some training. Ideally, all staff would be given at least brief training to encourage all public employees to work to produce the best possible results for the citizens of the municipality. If your municipality has already made a significant start on performance management, less training will be needed. Initially, the training will need to focus on those managers and staff who will be involved in the performance measurement effort. Because of inevitable staff turnover, provision also needs to be made for the training of new municipal employees, both managers and staff.

How much training is needed? Some municipal personnel will require only perhaps a two-hour introduction to performance measurement, especially to encourage a results orientation. However, those who will be responsible for implementing the performance measurement process will likely need about two to three days of initial training. It is important that training is done not only at the start of the performance management strategy, but also on an ongoing, regular basis to reconfirm techniques and learn new approaches. Training will also be required for new staff joining the departments.

What technical assistance is likely to be needed? Hands-on, outside technical assistance can be very helpful in developing and implementing a performance measurement and performance management process. Training will likely have considerably more impact if accompanied by such assistance, especially on technical issues that arise. The technical assistance might come from external consultants, local universities, or even internal staff who have experience and knowledge of particular steps in the process. For example, some municipalities have planning or statistical staff with considerable experience in undertaking surveys. These persons may be available to help your individual agencies with surveys of their customers. In addition, some of your agencies are likely to have staff who already have experience in performance data collection and analysis, such as staff in health, education, and transportation agencies.
Who can provide the training? Training, while highly desirable, can put a strain on the resources of a government. Of course, not all the training needs to be done at once; it can take place over many months or years. Much of the training ultimately can be done by internal municipal staff who have obtained previous experience and knowledge of performance measurement and performance management issues. As with technical assistance, some agencies in your municipality are likely to have staff who already have experience in performance data collection and analysis, such as staff in health, education, and transportation agencies. These persons and their experience might be drawn on for training. One strategy that has been used is to ask persons who have completed the training to help train others. In the long run, providing adequate training is the responsibility of each agency itself. An agency can choose to use outside consultants, internal government personnel, or some combination. Increasingly, written materials are becoming available that can be used in the training. For example, one of the purposes of this manual is to serve as material for such training programs. You may need to translate some or all of the available material into the local language to enable effective training of line staff.

How much is such training (and technical assistance) likely to cost? This is a critical issue. Overall, the cost of training and technical assistance need not be large. As noted above, the primary out-of-pocket cost is likely to occur at the initial start-up of the performance measurement process, when outside assistance is likely to be needed. Later, your municipality is likely to be able to use internal personnel who gained experience during the initial stages of implementation to provide further training and technical assistance. Early start-up training and technical assistance might be available, if needed, from your national government or from international donors.

FINAL WORDS

Implementing a useful performance management process is not easy. Numerous technical and political issues need to be addressed, such as those discussed throughout this guide. The key for your municipality is to place greater focus on the outcomes of importance to your citizens, including the international Millennium Development Goals and other outcomes relating to the quality of life of your citizens.

The resulting ability of your municipality to track key outcomes of vital importance to your citizens, and then to use that information to improve the quality and effectiveness of your services to the public, is what is likely to make all this effort well worth it.
BIBLIOGRAPHY

Albania National Report on Progress Towards Achieving the Millennium Development Goals. August 2004.

Analyzing Outcome Information: Getting the Most from Data. Washington, D.C.: The Urban Institute Press, 2004.

Approaching Performance Management in Local Government: A Guide. Centre of Expertise for Local Government Reform, Directorate of Cooperation for Local and Regional Democracy, Council of Europe, Strasbourg.

Armenia Poverty Reduction Strategy Paper: Progress Report (2004-2005), First Term. Yerevan, 2006.

Bangalore City Indicators Programme. Government of Karnataka, Bangalore Metropolitan Region Development Authority, December 2000.

"DAI/SLGRP CitiStat Implementation." Development Alternatives, Inc., working paper, undated but probably 2006.

Every KID COUNTS in the District of Columbia: 13th Annual Fact Book. D.C. KIDS COUNT Collaborative for Children and Families, 2006.

Georgia Customer Survey 2004. USAID Local Government Reform Initiative (LGRI) in Georgia.

Georgia Local Government Reform Initiative Final Report. USAID Georgia Local Government Reform Initiative, January 2005.

How Effective Are Community Services? Procedures for Performance Measurement. Washington, D.C.: The Urban Institute and the International City/County Management Association, 2006.

Hatry, Harry. Performance Measurement: Getting Results, 2nd Edition. Washington, D.C.: The Urban Institute Press, 2006.

Indicators for Monitoring the Millennium Development Goals: Definitions, Rationale, Concepts, and Sources. New York: United Nations, 2003.

Kavaja Municipality, Albania. City Cleanliness Rating by the Trained Observer Ratings Approach. November 2006.

Key Steps in Outcome Management. Washington, D.C.: The Urban Institute Press, 2003.

Localizing the Millennium Development Goals: A Guide for Municipalities and Local Partners. UN-HABITAT, United Nations Human Settlements Programme, Nairobi, Kenya, March 2006.

Mark, Katharine. Using Performance Management to Strengthen Local Services: A Manual for Local Governments in Ethiopia. Washington, D.C.: July 2006.
The Millennium Development Goals Report 2006. New York: United Nations, 2006.

Progress Report on Regional Development Strategy of Fier Region. UNDP Albania, November 2005.

Republic of Albania. Law on Organization and Functioning of Local Governments, July 31, 2000.

State of the Subways Report Card. NYPIRG Straphangers Campaign, Summer 2004. Available at http://www.straphangers.org/

Surveying Clients About Outcomes. Washington, D.C.: The Urban Institute Press, 2003.

Urban Indicators Guidelines: Monitoring the Habitat Agenda and the Millennium Development Goals. United Nations Human Settlements Programme, August 2004.

Using Outcome Information: Making Data Pay Off. Washington, D.C.: The Urban Institute Press, 2004.