Assessment of Local Governance and Development Performance in Indonesia: Current Models, Challenges and Future Perspectives¹

Dr. Astia Dendi²

Abstract

This paper explores the current approach and tools of performance measurement implemented by the government of Indonesia in the decentralized governance context. Following the 1997 financial crisis, Indonesia embarked on the so-called "Big Bang" decentralisation policy, whose implementation actually began in 2001. It has led to the devolution of authorities to provincial and local governments. These sub-national governments now have more opportunities and responsibilities to develop their regions according to local needs and people's aspirations, involving the private sector and civil society organizations. Within the framework of national decentralization and an increasingly competitive global context, sub-national governments have made tremendous efforts toward the provision of public services, the achievement of sustainable economic growth, and the reduction of regional disparities and poverty. However, the implementation of the decentralisation policy has not always been a smooth process, nor has it effectively generated all desirable changes and impacts. The current shortcomings call for better policy, practical tools and systemic capacity building for performance management.

A long-lasting debate at the policy and implementation levels has concerned the efficiency, applicability and comparability of the different approaches and tools for local government performance assessment introduced by central government agencies, including the Ministry of Finance, the Ministry of Home Affairs, the Ministry of Administrative Reform and the Ministry of National Development Planning (BAPPENAS). It can be concluded that the current models of performance assessment in Indonesia overwhelm local governments rather than stimulate and capacitate them to enhance their learning processes, strengthen their capacity for policy formulation, and align strategies for addressing local/regional issues and emerging global challenges. Moreover, it turned out that the central government-driven models of performance assessment are notably rich in their number of indicators but rather poor in terms of the attainability or accessibility of verifiable data in cost-effective ways. We suggest a nested model of performance assessment that integrates three spheres of performance evaluation, namely financial/budget performance, governance performance and development performance. In addition, the paper discusses lessons learned and suggests process and steering structures toward effective implementation of the nested model of local governance/development performance evaluation.

Keywords: Good governance, Indonesia, local governance, performance assessment, financial performance, governance performance, development performance.

1 A paper submitted to the International Association of Schools and Institutes of Administration (IASIA) for presentation at the coming congress entitled "Public Sector Strategy for Overcoming Growing Global Inequality", to be held in Bali, Indonesia, from 12-17 July 2010.
2 Dr. Astia Dendi, Senior Advisor, Decentralisation as Contribution to Good Governance (GTZ DeCGG), Kementerian Dalam Negeri Republik Indonesia, Jl. Merdeka Utara No. 7, Jakarta. Telephone +62 21 351 1584; Fax +62 21 386 8167; Email: Astia.Dendi@gtz.de. The perspectives presented in this paper do not represent official views of Deutsche Gesellschaft fuer Technische Zusammenarbeit (GTZ) GmbH.

1. Introduction

1.1 Background

Following the financial crisis, Indonesia has since 1999 embarked on a decentralisation policy that was called the "Big Bang" by the World Bank due to its radical shift in authority from national to sub-national government. Implementation actually began in 2001 and has led to the devolution of authorities to provincial and local governments. Indonesia is a unitary state that presently consists of 524 autonomous regions (33 provinces, 398 districts and 93 municipalities) (MoHA, 2009). The sub-national governments have more opportunities and responsibilities to develop their regions according to local needs and people's aspirations, involving the private sector and civil society organizations. Within the framework of decentralization and an increasingly competitive global context, central and sub-national governments have made tremendous efforts towards improved provision of public services, achievement of sustainable economic growth, and reduction of regional disparities and poverty. However, the implementation of the decentralisation policy has not always been a smooth process, nor did it effectively generate all desirable changes and impacts. Some challenges remain, which among others include the continuation of bureaucratic and governance reform along with fiscal decentralization and the enhancement of cross-sectoral and vertical policy coherence.

Meanwhile, the National Mid-Term Development Plan (RPJMN) 2010-2014 has put a strong focus on the need to establish better inter-sector coordination and national-regional/local interaction to pursue economic development and the improvement of people's welfare (Agenda I). Along with that, the RPJMN 2010-2014 also emphasizes the improvement of governance (bureaucratic and governance reform), which calls for a more systematic and holistic implementation of performance management, including planning, performance-based budgeting and performance measurement.

Policy makers and some prominent civil society organizations in Indonesia have increasingly considered performance measurement as a management instrument to enhance the quality of services produced by public sector agencies. Furthermore, it is believed that an appropriate system of performance management would promote organizational learning and strengthen customer orientation among public agencies.
One of the fundamental challenges, however, has been the establishment of a coherent regulatory framework, followed by effective collaborative efforts to improve the performance of public agencies at all levels.

1.2 Scope and Structure of the Paper

This paper explores the current approach and tools of performance measurement implemented by the government of Indonesia in the decentralized governance context. We analyze and discuss the underlying concepts and legal frameworks of performance measurement applied in Indonesia. Prospects and challenges facing the implementation of outcome-focused performance measurement are discussed, based on day-to-day observations during the five most recent years as well as local stakeholders' perspectives. The paper concludes by addressing some outstanding issues and action-oriented recommendations to make outcome-focused performance measurement work better.

2. Performance Measurement System in the Public Sector: A Theoretical Perspective

2.1 Means-Ends Structure and Performance Measurement

Governments in developing countries, including the Indonesian government, have increasingly drawn attention to the application of performance measurement along with the notion of bureaucratic reform and good governance practices. In the public sector context, performance measurement is considered a connector between information and management decision-making for the benefit of the public. By using valid and reliable instruments, performance measurement provides elected officials with sound information and a profound understanding of policy outcomes and constraints, and thus drives the adjustment or alignment of policies or programmes to create greater vertical and horizontal coherence through evidence-based policy making and programme planning. Reliable performance measurement will, therefore, enhance the efficiency of resource use. Furthermore, it will enhance effectiveness (achievement of the planned policy or programme outcomes) and transparency, as well as vertical and horizontal accountability. However, there is still a lack of common understanding, and even a persistent debate, about the meaning and design of reliable and cost-effective instruments to measure public sector performance.

Indeed, the framework of performance measurement can be traced back to the Logical Framework concept (means-ends structure) introduced by USAID in the late 1960s (Dale, 2003). The logical framework helps development planners to structure project strategy along logical effect chains (a linear means-ends relationship), from transforming inputs into outputs toward specified goals. In addition, the logical framework addresses another dimension of development strategy, namely indicators and achievement targets (performance measures), which generally consist of output, purpose and goal to be monitored and evaluated.

In the context of regional development, for example, Dale (2003) introduces an extended means-ends structure in a logical framework for planning and monitoring [or performance measurement]. The model follows logical cause-effect chains in line with the "convention" of development science. Dale's notion of the logical framework has the following means-ends structure:

"Activities (with input indicators) – Outputs – Immediate Objectives – Effect Objectives – Development Objectives"

Dale's means-ends structure is clearly an objective-centered model. He maintains the argument that any development effort should ultimately bring about an improvement in the quality of people's lives. Therefore, development programmes or services should focus on creating benefits for specified beneficiaries (target groups) as well as impacts for other people (the broader target population). In this perspective, Dale (2003) furthermore argues that the outputs, namely the products or services generated through the utilization of inputs in implementation tasks (activities), tell nothing about benefits for the people; therefore, development should go beyond outputs, and the means-ends structure should show specific and unambiguous immediate changes toward desirable benefits in terms of improvement in people's quality of life.

Furthermore, it is noticeable that Dale (2003) differentiates objectives into three hierarchies (means-ends structure), namely the immediate objective, effect objective and development objective. The notion of immediate objective refers to the immediately intended results of the outputs for specified beneficiaries. The notion of effect objective refers to direct and unambiguous intended improvements for intended beneficiaries as the direct effect of achieving the immediate objective. By contrast, the term development objective refers to the overall intended impacts of any planned development intervention or service; this objective should clearly express benefits in terms of some improvement in the quality of life of people.

In contrast to Dale's result chains (means-ends structure), GTZ recently introduced a revised result chain model (Sckeyde and Wagner, Eds., 2008: page 8) as follows:

Inputs – Activities – Outputs – Objectives – Indirect Results

GTZ defines outputs as short-term results of the activities in terms of equipment, materials or services that are available for use by other actors (Sckeyde and Wagner, Eds., 2008: page 7). The notion of 'use of outputs' expresses the change process that intermediaries and target groups undergo in order to achieve specified objectives. In this perspective, the objective is defined as the direct results among intermediaries and target groups with a clear causal link to activities, which can thus be quantitatively attributed to an individual measure (Sckeyde and Wagner, Eds., 2008: page 7). The notion of 'indirect results', however, can no longer be causally and quantitatively attributed to an individual measure; these results depend on inputs from many other actors whose share in the overall change may be plausibly demonstrated but cannot necessarily be isolated or quantified (Sckeyde and Wagner, Eds., 2008: page 7).
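To make the means-ends structure more concrete, the sketch below encodes Dale's five-level chain as a simple data structure with indicators attached to each level. The level names follow Dale (2003); the field names and the entries for a rural road programme are purely illustrative assumptions and are not drawn from any official planning instrument.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LogframeLevel:
    """One level of a means-ends (logical framework) structure."""
    name: str                                             # e.g. "Outputs"
    statement: str                                        # what the level should deliver or change
    indicators: List[str] = field(default_factory=list)   # measures used to track it

# Dale's (2003) five-level chain, filled with hypothetical entries
# for an illustrative rural road programme.
logframe = [
    LogframeLevel("Activities", "Contract and supervise road works",
                  ["budget disbursed (input indicator)"]),
    LogframeLevel("Outputs", "Feeder roads rehabilitated",
                  ["kilometres of road rehabilitated"]),
    LogframeLevel("Immediate Objectives", "Farmers use the roads to reach markets",
                  ["share of farmers selling produce at district markets"]),
    LogframeLevel("Effect Objectives", "Farm-gate incomes increase",
                  ["average household income from crop sales"]),
    LogframeLevel("Development Objectives", "Rural poverty declines",
                  ["district poverty headcount ratio"]),
]

# The ordering itself expresses the if-then logic of the chain.
for lower, higher in zip(logframe, logframe[1:]):
    print(f"if '{lower.name}' are achieved, then '{higher.name}' should follow")
```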
Based on these two models, Dale (2003) and GTZ (Sckeyde and Wagner, Eds., 2008), and the given examples, some common perspectives can be identified. The first is that the 'output' is produced within the service provider's system, while the further change process and results/objectives occur in the 'client system' (intermediaries and the target population). Secondly, Dale's notion of 'immediate objective' appears closely compatible with GTZ's notion of 'use of output'. Thirdly, both models share the principle that the means-ends structure should clearly and unambiguously specify intermediate changes/improvements that clarify how outputs will contribute significantly toward the realization of intended benefits for the specified target population. The requirement of 'specific and unambiguous' changes in both models calls for the appropriate selection and formulation of clear and measurable indicators of the intended changes/benefits. The notion of 'measurable indicators', however, should not be interpreted as merely quantitative indicators; it also includes qualitative indicators or a combination of both.

In contrast to these two models, some practitioners develop other models, although they share the convention of "if-then" logic found in the academic literature. Schmidt (2009: 32), for example, introduces the following means-ends structure: "Input – Outcome – Purpose – Goal". Schmidt's notion of outcome refers to the specific results that a project team must deliver by managing inputs, while the purpose refers to the anticipated impact of doing the project (the change expected from producing outcomes). Furthermore, Schmidt (2009: 32) defines the Goal as the high-level, big-picture strategic or programme objective to which the project contributes.

Furthermore, we explore how scholars and practitioners define performance measurement. We recognized that there exist different perspectives on the meaning and scope of what performance measurement is about (Mardiasmo, 2002; Hatry, 2006; Jantz, 2008). Hatry (2006: page 3) defines performance measurement as the regular measurement of the results (outcomes) and efficiency of services or programs. Furthermore, Hatry's notion of service efficiency differs from the widely adopted concept of output-based efficiency (the input-to-output ratio); instead, Hatry suggests an outcome-based efficiency indicator concept. For years, scholars have shared and maintained the argument that the input-to-output ratio as an indicator of efficiency is risky for a simple reason: efficiency can be increased by reducing the quality of output. Instead of using "cost per client served", for example, it would be more accurate to use "cost per client whose condition improved after services" as the efficiency indicator (Hatry, 2006). The first ratio is an output-based efficiency indicator, while the second is an outcome-based efficiency indicator.

Furthermore, the emphasis on regular measurement of progress toward specified outcomes is, Hatry argues, a vital element of a customer-oriented, managing-for-results process to ensure maximum benefits and minimum negative consequences for the customers of services or programs.
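The difference between the two efficiency ratios can be illustrated with a small calculation. The figures below are hypothetical and serve only to show how a programme that looks cheaper per output can be the more expensive one per improved outcome, in the sense Hatry (2006) describes.

```python
# Hypothetical figures for two service programmes; not actual data.
programmes = {
    "Programme A": {"cost": 100_000.0, "clients_served": 1_000, "clients_improved": 400},
    "Programme B": {"cost": 120_000.0, "clients_served": 800,   "clients_improved": 600},
}

for name, p in programmes.items():
    output_based = p["cost"] / p["clients_served"]      # cost per client served
    outcome_based = p["cost"] / p["clients_improved"]   # cost per client whose condition improved
    print(f"{name}: {output_based:.0f} per client served, "
          f"{outcome_based:.0f} per client improved")

# Programme A looks more efficient on the output-based ratio (100 vs 150),
# but Programme B is more efficient on the outcome-based ratio (200 vs 250).
```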
Furthermore, Hatry (2006) argues that, besides regular tracking for budget purposes, managers in the public sector need more frequent outcome information to assess the success of their programme activities, to identify significant problems in achieving the specified outcomes, and to motivate personnel to strive for continuous service improvement. Performance measurement has another vital goal, however, namely to assess and ensure equity. Hatry (2006) maintains that a well-designed measurement system will enable agency managers to assess the fairness of services or programs and adjust them appropriately.

More recently, the GTZ Sector Network "Governance Asia" (2010: 7-8) defines performance measurement

"... as measurement on a recurring basis of the outcomes or results and effectiveness of services or programs. The operative term in this instance is recurring. Consistent measurement of advancement toward specific outcomes is a critical part of all management efforts to improve results. An important aspect of performance measurement in local government is its customer orientation. Local government focuses on increasing benefits and decreasing negative consequences. This can refer to citizens who receive direct services or individuals or businesses affected by policy decisions or service delivery..."

From a systems perspective, Mardiasmo (2002) defines performance measurement as an assessment tool which assesses strategy implementation through financial and non-financial measures. In addition, Mardiasmo (2002) emphasizes that the budget is one of the financial measures used to assess strategy implementation; it is a primary instrument of many decision-making functions and is used as a tool to achieve organizational goals.

Jantz (2008) defines performance measurement as the regular collection, recording and evaluation of performance data. With reference to Hood (2007)³, he distinguishes two different performance measurement systems:
1. Target systems that measure the current performance of a period (using previously defined performance metrics);
2. Rankings that measure current or past performance in relation to other comparable entities (often known as benchmarking). The objective here is to inform customers about an entity's performance or to provide political decision-makers with starting points for increasing performance.

3 Christopher Hood. 2007. Public service management by numbers: Why does it vary? Where has it come from? What are the gaps and the puzzles? In Public Money and Management, April 2007, Vol. 27 (2): 95-102.

Scholars and practitioners recognize that performance measurement is a management tool to maximize success. In line with this perspective, the joint National Performance Management Advisory Commission of the United States of America and Canada (NPMAC, 2009) defines performance management as⁴

"... an ongoing, systematic approach to improving results through evidence-based decision making, continuous organizational learning, and a focus on accountability for performance..."

4 National Performance Management Advisory Commission. 2009. A Performance Management Framework for State and Local Government: From Measurement to Management and Improving, www.gfoa.org/.../PMCommissionFrameworkPUBLIC-REVIEW-DRAFT.pdf

Along with this definition, the Commission elaborates that performance management is integrated into all aspects of an organization's management and policy-making processes and transforms an organization's practices so that they are focused on achieving improved results for the public. Performance measurement is part of a broader system of performance management. In the public institution context, performance management consists of the systematic recording and tracking of the performance of public organisations in order to promote a continuous improvement process.

Furthermore, we found in the academic and practitioner literature that performance measurement and reporting are critical elements of performance management; however, they are insufficient to promote organizational learning and improved outcomes for the public (Thiel and Leeuw, 2002; Dale, 2003; NPMAC, 2009). Thus, the notion of performance management goes beyond the measurement and reporting of an organization's performance; it attempts to systematically use performance data as well as other tools to promote continuous learning for improvement and to strengthen the organization's focus on results (Sckeyde and Wagner, Eds., 2008; NPMAC, 2009; Kompas, 2010⁵).

5 Kompas News, 6th of May 2010, page 4: "Komisi II Pertanyakan Pemda".

2.2 Types of Indicator in Performance Measurement Systems

Boyne (2002) reports that scholars and practitioners generally draw performance indicators upon two corresponding but not entirely consistent models of organizational performance that contain a sequence of the service production process. The first model links economy⁶, efficiency and effectiveness (the 3Es model) as key dimensions to measure organizational performance (Boyne, 2002: page 17). The second model (Boyne, 2002: page 17) is a linear structure of inputs-outputs-outcomes (the IOO model). The latter has been discussed in Section 2.1, taking Dale's and GTZ's models as examples.

6 The term 'economy' (Boyne, 2002) is frequently equated with the level of spending on a service, but it is more accurately defined as the cost of procuring specific service inputs of given quality.

In the 3Es model, the term 'economy' (Boyne, 2002) is frequently equated with the level of spending on a service, but it is more accurately defined as the cost of procuring specific service inputs of given quality (for example premises, staff and equipment). However, this notion of economy is not without questions, and it involves political issues. While the notion of economy leads authorities to seek to minimize the price paid for inputs or production factors, there is no common view in the literature on whether squeezing the wages of labourers or civil servants is good or bad performance. Moreover, Boyne (2002) maintains the argument that high or low spending in itself reveals nothing about service standards, or about the success or failure of local authorities.

In contrast to economy, economists define efficiency in terms of 'technical efficiency' and 'allocative efficiency' (Boyne, 2002). The first refers to cost per unit of output, while the latter refers to the responsiveness of services to public preferences. However, the 3Es model usually adopts the notion of 'technical efficiency' (Boyne, 2002).

The concept of effectiveness refers to the achievement of the formal objectives of a policy or service (Boyne, 2002; Dale, 2003; Sckeyde and Wagner, Eds., 2008).

Recent academic and practitioner literature (Boyne, 2002; Dale, 2003; Sckeyde and Wagner, Eds., 2008; Dendi et al., 2010), however, shows that public organizations increasingly adopt a model with broader criteria of organizational performance. This model corresponds to the means-ends structure discussed in Section 2.1. The fundamental structure of the model involves "IOO" (Input-Output-Outcome) indicators. The "IOO" model also embraces all elements of the 3Es model (inputs include expenditure, efficiency is the ratio of outputs to inputs, and outcomes include formal effectiveness). Moreover, the "IOO" model (Boyne, 2002) and its corresponding extended models (Dale, 2003; Sckeyde and Wagner, Eds., 2008; Dendi et al., 2010) not only make the "implicit" in the 3Es model "explicit"⁷, but also emphasize adopting both quantitative and qualitative dimensions of performance along with equity or fairness criteria. The equity or fairness of service provision can be assessed, for example, by the distribution of outputs and income by gender, age, race and geographical area (Boyne, 2002).

7 Readers interested in further exploring the conceptual advantages and disadvantages of the "IOO" model can refer to Boyne (2002: page 18).

The performance measurement system in Indonesia adopted an extended "IOO" model, as stipulated among others in Law 25 of 2004 on the National Development Planning System, Government Regulation (GR) 39 of 2006 on Supervision and Evaluation of Development Plan Implementation, the Decision of the Head of the State Administration Institute (LAN) 239/IX/6/8/2003 on guidelines for performance reporting by governmental agencies, and Government Regulation 6 of 2008 on Guidelines for the Evaluation of Local Governance Implementation. The model involves a set of performance indicators, namely Input, Output, Outcome, Benefit and Impact.

From the academic and practitioner literature we identified that different performance measurement systems use different sets of indicators, reflecting different scopes and focuses of performance measures. OECD (2009: page 32) recognizes four categories of performance indicators, namely:
• Input measures: reveal what resources (e.g. people, money, and time) are used in what amounts to produce and deliver goods and services;
• Process measures: reveal the way in which activities are undertaken by a programme or project with the resources described;
• Output measures: capture the goods and services the activities produced (e.g. number of Small and Medium Enterprises served, kilometres of road built);
• Outcome measures: capture the dimension that is expected to change as a result of an intervention (policy, programme, or project) and the outputs produced.

The GTZ Sector Network "Governance Asia" (2010) recognizes three categories of performance indicators, consistent with the GTZ means-ends structure (Sckeyde and Wagner, Eds., 2008) but not explicitly consistent with the OECD's notion of performance indicators, particularly in terms of process indicators:
• Output-oriented indicators: these indicators focus on the direct output of administrative actions. Moreover, the primary relation of output indicators is with the input. Thus, output indicators always refer to efficiency and do not capture the notion of effectiveness;
• Outcome-oriented indicators: these indicators attempt to establish a relation between the effects of political decision making and subsequent activities. The outcome may include intended as well as unintended effects and, therefore, outcome-oriented indicators refer to effectiveness;
• Process-oriented indicators: these indicators attempt to capture quality in a wider sense.

Furthermore, we observed, and found in the literature (Boyne, 2002; LGSP, 2009; Dendi et al., 2009), that in practice the "IOO" model remains vulnerable to an inappropriate orientation toward quantitative input and output measures of performance. We further discuss this issue in the sections to come.
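As a compact illustration of how the OECD's four categories partition the indicators of a single intervention, the snippet below classifies a handful of indicators for a road programme. The category labels follow OECD (2009); the wording of the individual indicators is illustrative only and not taken from any official indicator list.

```python
# Hypothetical indicators for a single road programme, grouped by
# the four OECD (2009) categories; the wording of each indicator is illustrative.
road_programme_indicators = {
    "input":   ["budget allocated (IDR)", "engineers assigned"],
    "process": ["share of contracts awarded through open tender"],
    "output":  ["kilometres of road built"],          # OECD's own example of an output measure
    "outcome": ["average travel time to the district market",
                "share of villages with year-round road access"],
}

for category, indicators in road_programme_indicators.items():
    print(f"{category:>7}: {', '.join(indicators)}")
```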
3. Performance Measurement System in Decentralized Indonesia

Performance measurement in the public sector has multiple dimensions. In Indonesia there exist numerous public sector performance measurement instruments institutionalized or enacted through Government Regulations (GR). Table 1 presents a comparative summary of the performance reporting and measurement instruments currently applied in Indonesia. We focus on four performance measurement and evaluation instruments.

The first is LAKIP (Annual Performance Accountability Report of Governmental Institutions). It became effective in 1999 according to Presidential Instruction Number 7 of 1999 and was then consolidated through the operational guideline issued by the State Administration Agency (Decision 239/IX/6/8/2003). It was the first attempt at establishing a performance accountability system in Indonesia (Ind. SAKIP - Sistem Akuntabilitas Kinerja). It relies on a self-assessment approach to performance measurement and reporting, both to ensure that individual governmental agencies comply with the established operating procedures and standards (process indicators) and to strengthen result orientation (output and outcome indicators) among governmental agencies in any development intervention or service. While the process indicators refer to the norms, standards and procedures applied in the Indonesian administration system, the results indicators refer to the performance indicators set up by government agencies in their strategic plan and action plan documents. Conceptually, these strategic and action plans must be coherent with the national mid-term development plan.

The second instrument is the Evaluation of the Implementation of Regional/Local Governance (EPPD - Evaluasi Penyelenggaraan Pemerintahan Daerah). The EPPD pursues a number of objectives, namely: (i) to assess the performance of sub-national governance in an attempt to increase governance performance according to the principles of good governance; (ii) to assess regional/local capacity in achieving the specified objectives of regional autonomy, namely improvement of people's welfare, improvement of public service quality and regional competitiveness, among others; and (iii) to monitor the progress of the establishment of soft and hard infrastructure for governance (institutions, infrastructure and human resources) in newly established autonomous regions. To pursue these objectives, the EPPD adopts three different performance measurement modules. The first module, the evaluation of sub-national (regional/local) governance performance (EKPPD), refers to the first objective of the performance assessment (EPPD). The second module, the evaluation of the capacity to execute regional autonomy (EKPOD), refers to the second objective. The EKPOD, however, will be executed only in regions graded as poor performers for three consecutive years. The third module, the evaluation of newly established autonomous regions, refers to the third objective. In this paper, however, we mainly discuss the first module, the EKPPD.

Furthermore, Government Regulation (GR) 6 of 2008 defines "the performance of the implementation of local governance" as the results of the implementation of local governance functions measured through input, process, output, results and benefit/impact indicators. In line with this definition, the EKPPD is defined as a systematic process of collection and analysis of governance performance data using a performance measurement system. The EKPPD sets two arenas of performance assessment, namely the elected policy makers (policy-making level) and the implementing agencies at the provincial as well as the district/municipal level. The performance assessment for policy makers involves thirteen (13) dimensions, with 63 governance-related key performance indicators in total at the provincial level and 74 indicators at the district/municipal level (more detail in Table 2). These dimensions include, among others, public order; coherence between regional and national policies; effectiveness of the relationship between the regional/local government and the regional/local legislative council; effectiveness of the decision-making process of the provincial/local legislative council; effectiveness of the decision-making process of the Head of Region; compliance with regulations and standards; transparency; and innovation in governance. Meanwhile, the performance assessment for implementing agencies involves nine (9) dimensions, with a total of 151 and 174 key performance indicators for the provincial and district/municipal levels respectively.

The third instrument is the EPRPD (Evaluation of Sub-National Development Plan Implementation). It includes assessment of the input-output relation and realization as well as the achievement of outcomes (including benefits/impacts) of development programs or services financed through the regional (provincial) and local (district/municipal) budgets. Indeed, Article 51 of GR 8 of 2008 requires the elaboration of a Ministerial Regulation providing operational guidelines for the EPRPD; however, that regulation has not yet been prepared. The leading agency for the EPRPD at the national level is the Ministry of Home Affairs. At the provincial level, the Head of the Provincial Development Planning Board is in charge of conducting the EPRPD; similarly, the Head of the Local Development Planning Board is in charge at the district/municipal level.

The fourth instrument refers to the supervision and evaluation of national development plan implementation (Ind. PEPRP) according to Law 25 of 2004 and GR 39 of 2006. It focuses on programs financed through the national budget, including programs financed through deconcentration and co-administration schemes. The leading agency for the PEPRP at the national level is the State Ministry of National Development Planning (BAPPENAS). It involves quarterly progress monitoring, evaluation of the implementation of the annual plans of Ministries/Non-Ministerial State Institutions, and evaluation of the implementation of the mid-term (five-year) development plans of Ministries/Non-Ministerial State Institutions. Both progress monitoring and the evaluation of annual plans are done quarterly, while the evaluation of mid-term development plans must be conducted at least once during the five-year implementation period, at the latest one year before the end of the period. The mid-term development plan evaluation focuses on measuring the outputs, outcomes and macro-economic framework conditions/impacts specified in the mid-term development plan documents.

Table 1: Overview of Instruments of Performance Measurement for Central and Sub-National Governments

LAKIP
- Arena of assessment: all central-level and sub-national-level governmental institutions.
- Leading agency and legal base: Ministry of Civil Service and Bureaucratic Reform; Presidential Instruction Number 7 of 1999 and the operational guideline from the State Administration Agency (Decision 239/IX/6/8/2003).
- Indicators covered: input, process and output (yes); outcome and benefit/impact (<yes).
- Remarks: "<yes" denotes that although conceptually the indicator must be reported, we lack evidence that these indicators were appropriately reported.

EKPPD (Module 1 of EPPD)
- Arena of assessment: policy makers (Heads of provincial governments; district/municipal governments; provincial/local parliaments (DPRD)) and implementing agencies at provincial and district levels (SKPD).
- Leading agency and legal base: Ministry of Home Affairs (MoHA); Law 32 of 2004 on Regional/Local Governance; GR 3 of 2007 on sub-national government reports to the central government and information to the provincial/local parliament; GR 38 of 2007 on Functional Assignments; GR 41 of 2007 on the Organization of Local Government; GR 6 of 2008 on the Evaluation of Local Governance Implementation; Regulation of MoHA 73 of 2009.
- Indicators covered: input, process, output, outcome and benefit/impact (all yes).

EPRPD (Evaluation of Sub-National Development Plan Implementation)
- Arena of assessment: provincial and local governments and implementing agencies (SKPD)⁸.
- Leading agency and legal base: Ministry of Home Affairs (MoHA, Governor, Regent/Mayor); Law 32 of 2004 (Article 154); GR 8 of 2008 on Steps and Procedures of Formulation, Supervision and Evaluation of Sub-National Development Plans.
- Indicators covered: input, process, output and outcome (yes); benefit/impact (unclear).

Evaluation of National Development Plan Implementation
- Leading agency and legal base: Ministry of National Development Planning (BAPPENAS); Law 25 of 2004 on the National Development Planning System; GR 39 of 2006.
- Progress monitoring of the implementation of development plans: input, process and output (yes*); outcome and benefit/impact (no). [*Ref. Article 4 of Para 4 under Chapter Two of GR 39 of 2006.]
- Evaluation of the implementation of the annual plans of Ministries/Non-Ministerial State Institutions: input, process and output (yes*); outcome (yes**); benefit/impact (no). [**Ref. Article 3 of Para 13 under Chapter Two of GR 39 of 2006 regarding the evaluation of the Annual Plan of Ministry/Institution and the Annual Plan of Government.]
- Evaluation of the strategic plans of Ministries/Non-Ministerial State Institutions and the National Mid-Term Development Plan: input, process and output (yes); outcome and benefit/impact (yes***). [***Ref. Article 5 of Para 15 under Chapter Three as well as the Elucidation of GR 39 of 2006 regarding the Evaluation of the Strategic Plans of Ministries/Non-Ministerial State Institutions and the National Mid-Term Development Plan.]

8 This evaluation is primarily based on the quarterly reports of the respective agencies to the Head of BAPPEDA (Development Planning Board).

Table 1 shows that these existing performance measurement instruments have some shared objectives and measure the same dimensions of performance (economy, efficiency and effectiveness) and the same levels of indicators along the results chain (input-process-output-outcome-benefit/impact). In addition, we found that these instruments put too strong a focus on quantitative measures of performance (particularly inputs and outputs) and pay less attention to the qualitative dimension of performance. Moreover, all instruments overlook the different contexts that influence [local] government performance; this may lead to inappropriately punishing poor regions working under difficult conditions or inappropriately rewarding better-off regions working in a favorable context. We further discuss these issues and other problems under Section 4.

Table 2: Arena and number of performance indicators at the provincial, municipal and district levels
- Province: policy-making level, 63 indicators in total; implementation level, 151 indicators (general administration 28, obligatory functions 107, discretional functions 16).
- Municipality: policy-making level, 74 indicators in total; implementation level, 174 indicators (general administration 28, obligatory functions 120, discretional functions 26).
- District: policy-making level, 72 indicators in total; implementation level, 174 indicators (general administration 28, obligatory functions 120, discretional functions 26).
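Although the official EKPPD scoring rules are not reproduced here, the sketch below illustrates in general terms how a composite score over such a large indicator set might be aggregated and used to rank regions: KPI scores are normalized, averaged within each dimension, and combined with dimension weights. The dimension names, weights and scores are hypothetical assumptions and do not reproduce the formula prescribed by GR 6 of 2008 or the MoHA guidelines.

```python
# Hypothetical aggregation of KPI scores into a composite ranking score.
# Dimension names, weights and scores are illustrative; this is not the
# official EKPPD formula.
def composite_score(dimension_scores: dict[str, list[float]],
                    weights: dict[str, float]) -> float:
    """Average the normalized KPI scores (0-1) within each dimension,
    then combine the dimensions with the given weights (weights sum to 1)."""
    total = 0.0
    for dim, scores in dimension_scores.items():
        dim_average = sum(scores) / len(scores)
        total += weights[dim] * dim_average
    return total

region_a = {
    "public order":     [1.0, 0.8, 0.6],   # e.g. spatial-plan regulation in place, ...
    "policy coherence": [0.7, 0.9],
    "transparency":     [0.5],
}
weights = {"public order": 0.4, "policy coherence": 0.4, "transparency": 0.2}

print(f"Composite score for Region A: {composite_score(region_a, weights):.2f}")
```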
The number of performance indicators involved in the assessment of local governance performance requires robust and plausible baseline data and milestones to measure progress. In addition, it requires tremendous effort and strong commitment to collect evidence and report the progress of achievements. Sub-national governments, however, have to carry out self-assessments and deliver reports not only according to the EKPPD framework but also for the other performance evaluation schemes presented in Table 1. Overall, a sub-national government has to deliver more than 50 different kinds of performance and accountability reports annually.

Table 3 highlights selected dimensions and examples of performance indicators for policy makers and implementing agencies at the provincial level only, according to the EKPPD framework.

Table 3: Some dimensions and key performance indicators (KPIs) for local governance performance (EKPPD) at the provincial level

Arena: Policy Makers (63 KPIs)
- Public order (10 KPIs): availability of a Provincial Regulation on the Spatial Plan (yes/no); proportion (%) of districts/municipalities in the province having regulations on building permits; frequency of demonstrations against provincial/district/municipal regulations.
- Coherence between regional and national policies (13 KPIs): proportion (%) of regional development priorities coherent with national development priorities; obligatory functions (%) implemented by the provincial/district/municipal government according to GR 38 of 2007 (26 obligatory functions in total); ratio of expenditure for health and education to total expenditure (%); proportion of districts/municipalities of the province establishing regulations on public services (%); coherence of the provincial/local governance structure with GR 41 of 2007.
- Intensity and effectiveness of public consultation (2 KPIs): number of public consultations conducted; existence of information media reinforced by a Governor's regulation and accessible to the public.
- Innovation in local governance (10 KPIs): ratio of the budget allocated for innovative governance to total expenditure (%); economic growth (GRDP growth); establishment of e-procurement (yes/no).

Arena: Implementing Agency, General Administration Aspect
- Technical policies of governance (2 KPIs): national programs implemented by implementing agencies at the regional/local level (%); existence of Standard Operating Procedures (yes/no).
- Institutional structure (3 KPIs): agreement of the organizational structure, tasks and function assignments with GR 41 of 2007; number of functional positions without an occupant.
- Regional/local development planning (4 KPIs): existence of planning documents (numbers); number of programmes in regional development plan documents accommodated in the implementing agency's development plan documents.
- Local financial management (5 KPIs): ratio of the implementing agency's budget to the total expenditure of the region; ratio of capital expenditure to the total expenditure of the implementing agency.

Arena: Implementing Agency, example of Obligatory Functions
- Health sector (11 KPIs): life expectancy; number of villages (%) included in the Universal Child Immunization service; number (%) of undernourished children under five years old receiving appropriate medical treatment; number of identified dengue fever cases treated appropriately.

Overall, the EKPPD for the policy-making and implementation levels captures mainly process- and output-oriented indicators. However, in the case of the obligatory functions (at the implementation level), a small number of outcome-oriented and impact indicators are included, such as life expectancy, regional economic growth, increased productivity of food crops, sectoral contribution to GRDP, and a reduced unemployment rate.

4. Prospects and Problems in the Implementation of Performance Measurement

4.1 Prospects

The Indonesian government has apparently made tremendous efforts and innovations to establish a performance measurement system as a management tool to improve results and its public accountability. Conceptually, the Indonesian performance measurement system adopts an extended "IOO" (input-output-outcome) model in which the outcome is further elaborated into benefit and impact. Adoption of this model could stimulate change toward establishing more efficient and responsive government and strengthening the focus on results, in line with the principles of good governance. We observed that the implementation of performance measurement, reporting and evaluation has considerably stimulated debates and dialogues among multiple stakeholders which, in turn, could help improve the quality of governance and, thus, the quality of democratic and economic outcomes at all levels.
However, the implementation of performance measurement in the decentralized context of Indonesia apparently faces a number of fundamental challenges. We discuss some of these challenges in the following section.

4.2 Issues and Problems

4.2.1 General Issues and Problems

We recognized that numerous existing sectoral laws and regulations could provide plausible legitimacy for the central and sub-national governments to plan, allocate resources for, and implement performance measurement. These laws and regulations also provided a doorway for civil society organizations to participate in performance measurement and evaluation. However, these sectoral regulatory frameworks brought about not only legitimacy and greater potential toward creating better governance outcomes but also notable conflicting orientations, overlapping authorities, and mismatched criteria and indicators of performance (e.g. GR 3 of 2007 and GR 6 of 2008). Furthermore, the inter-ministerial platform to coordinate and synchronize the regulatory frameworks and instruments of performance measurement has not been effective, due among other things to a lack of functional clarity and collective commitment.

Meanwhile, the majority of eligible civil society organizations lack the incentives, secure financial resources and technical capacity to establish and mobilize multi-stakeholder processes that make performance measurement work for enhancing evidence-based policy making and strengthening outcome orientation among public agencies at all levels. However, an in-depth analysis of the regulatory framework and its consequences for the effectiveness of performance measurement is beyond the scope of this paper.

Some previous practitioners' observations and our own added evidence to support the multiple-principals and multiple-tasks (goals) trade-off hypothesis (Propper and Wilson, 2003). According to this hypothesis, a public sector agency inevitably serves multiple stakeholders, ranging from self-interested individuals, service users, tax payers and politicians to private sector organizations and civil society organizations at different levels, among others. This feature, in one way or another, leads public sector organizations to pursue multiple ends (multiple goals) which are not necessarily coherent with each other. These multiple and incoherent goals create difficulty in selecting appropriate performance indicators and in avoiding the "3Es trade-off" (Economy, Efficiency and Effectiveness). For instance, in pursuing the goal of improved quality and scope of health services for the poor, local governments have difficulty satisfying both efficiency and effectiveness criteria. This is, of course, not to say that a public agency has unchallenged legitimacy to deliver services for the public at any cost. Elected policy makers face increasing challenges to provide services in more accountable, efficient and cost-effective ways.
Furthermore, contextual diversity and inter-regional disparities in Indonesia have reasonably mobilized local governments and other stakeholders, including politicians, to challenge the application of uniform criteria and indicators of local governance performance. This debate calls for an improvement of the methodology, including the selection of appropriate criteria and key performance indicators along with appropriate weighting of performance scores.

These issues call for a more systematic and coherent performance measurement system, improvement of the indicator selection process, and capacity building efforts, particularly at the regional and local levels.

4.2.2 Issues and Problems at the Regional/Local Level

The problems inherent in the regulatory framework of performance measurement, as discussed in Section 4.2.1, have far-reaching consequences for sub-national governments and other stakeholders. The responses of sub-national governments and legislative councils toward the central government-led performance measurement vary considerably. A small number of sub-national governments, e.g. Solok district in West Sumatera province, Sragen district in Central Java, Tarakan City in Kalimantan and Jembrana district in Bali, as well as a number of regions in East Java and Eastern Indonesia (e.g. Mataram City and Bima City), are widely recognized for their "best practice" innovations in local governance and performance management, among others. However, it turned out that outcome-oriented services and performance measurement have not been fully adopted as a governance perspective and instrument by the majority of sub-national governments. We observed that the support structure to coordinate and implement outcome-focused performance measurement was either not in place or ineffective. Moreover, many performance reports maintained the "tradition" of reporting financial inputs and public expenditures used to produce physical outputs and services. Furthermore, sub-national governments in most regions lacked reliable and consistent baseline data and benchmarks for the formulation of realistic performance targets as well as for milestones of performance measurement. Although the central and sub-national governments have made substantial efforts to improve the availability and quality of data, the following problems remain pervasive in sectoral and regional planning documents:
• Lack of horizontal and vertical coherence among planning documents (including their objectives frameworks and performance indicators);
• Vague links between outputs and outcomes;
• Output and outcome indicators that are not clearly defined;
• End performance targets that are defined without sufficient baseline data to serve as milestones for performance measurement.

Furthermore, it is well recognized that the capacity of human resources to adopt the Input-Output-Outcome (IOO) model in planning and performance measurement varies considerably among regions. This limiting factor calls for a more comprehensive capacity building approach covering the individual, organizational and system levels.

5. Lessons Learned and Ways Forward

The Indonesian government has made tremendous efforts to establish a performance measurement system as a management tool to improve results and its public accountability. Conceptually, the Indonesian performance measurement system adopts an extended "IOO" (input-output-outcome) model in which the outcome is further elaborated into benefit and impact. The model shows the interest of the Indonesian government in strengthening the focus on results and, thus, strengthening its client orientation (benefits/impacts for citizens, individuals and business).

However, the implementation of the adopted model in Indonesia has not provided sufficient incentives for public agencies at all levels of administration to perform better. The major problems and obstacles to implementing the performance measurement system in Indonesia were discussed in the previous section. In short, we found that the inherent institutional complexity, along with the overwhelming amount of data needed for self-assessment and reporting, limits its actionability and effectiveness. The established system is simply not doable. Furthermore, we observed that the currently applied performance measurement instruments maintain the "tradition" of focusing on quantitative measures and overlook the value of the qualitative dimension of performance. These quantitative measures tell us very little about why a public agency performs or does not perform well.

In the absence of coherent objectives frameworks, effective support structures, clearly defined performance indicators and competent human resources, the central and sub-national governments have difficulties in implementing the concept of outcome-oriented performance measurement as well as the concept of performance-based budgeting. These problems, however, are not fully independent of conflicting political orientations and the multiple-principals trade-off discussed in Section 4.2.1.

One among several critical challenges is thus to establish a performance measurement system that meets the criteria of good governance, with a coherent legal and institutional framework that clarifies the interlinkages between planning, performance measurement and performance-based budgeting. Understanding this issue would facilitate the development of a nested model and workable concepts, as well as a self-dynamic (more client-orientation-driven than regulation-driven) performance measurement system. It is necessary to enhance evidence-based and accountable decision-making and to strengthen the focus of any intervention on results (outcomes). Toward these objectives, the following perspectives and measures are put forward:
• The State Ministry of National Development Planning should take a leading role in exploring best practices and facilitating multi-stakeholder dialogues toward establishing coherent conceptual, legal and institutional frameworks for performance measurement. This is in line with the perspective of the Indonesian Mid-Term Development Plan 2010-2014.
• In our view, impact monitoring and evaluation play a critical role in complementing recurring performance measurement. They attempt to capture longer-term and aggregated outcomes, namely end outcomes in terms of benefits/impacts for the public. These end outcomes are, of course, the results (effects of interactions) of multiple actors' efforts. In economic terms, these end outcomes are indeed the sum of the marginal outcomes of multiple actors in governance and development. In contrast to impact monitoring and evaluation, performance measurement should maintain its focus on the quality of processes and on the quantitative and qualitative dimensions of immediate outcomes.
• Furthermore, impact monitoring and evaluation attempt to capture not only the formalistic performance indicators (set up by service providers) but also external stakeholders' perspectives (external indicators). The notion of an external indicator measures the satisfaction of the public with the responsiveness, effectiveness and accountability of public service agencies.
• Along with the above efforts, civil society organizations should use the opportunities emanating from the decentralization policy and regional autonomy to play an active role in making performance measurement work for improving public services as well as for improving democratic and economic outcomes. This would require comprehensive capacity building covering the individual, organizational and system levels.
Acknowledgements

The author gratefully acknowledges Dr. Manfred Poppe, Senior Component Team Leader of Decentralisation as Contribution to Good Governance, for his valuable comments on the revision of this paper. I thank my colleagues from the Decentralisation as Contribution to Good Governance (DeCGG) Project, particularly M.P. Dwi Widiastuti, Mr. Mario Vieira and Mr. Arifin Bhakti, for sharing their observations and empirical experience from the sub-national level. I also thank Meithya Rose Prasetya and Jevelina Punuh for their assistance in the literature review and data collection.

References

Anonymous. 2009. Hasil penilaian kinerja pelayanan publik kabupaten/kota provinsi Nusa Tenggara Barat 2009. Pemerintah Provinsi Nusa Tenggara Barat dan GTZ-Good Local Governance (GLG). Unpublished.

Anonymous. 2009. Laporan Hasil Evaluasi Pemeringkatan Kinerja Penyelenggaraan Pemerintahan Daerah Propinsi, Kabupaten, dan Kota Berdasarkan LPPD Tahun 2007 Tingkat Nasional. Tim Independen Universitas Gadjah Mada dan Kementerian Dalam Negeri Republik Indonesia.

Anonymous. 2010. Laporan akuntabilitas kinerja instansi pemerintah (LAKIP) tahun 2009. Pemerintah Kabupaten Sumba Barat Daya.

Anonymous. 2010. Technical paper on performance measurement for local governance. GTZ Sector Network "Governance Asia", Working Group "Decentralization and Local Governance".

Boyne, G.A. 2002. Concepts and indicators of local authority performance: An evaluation of the statutory frameworks in England and Wales. Public Money and Management, April-June 2002: pp. 17-24. Great Britain.

Boyne, G.A., K.J. Meier, L.J. O'Toole Jr, and R.M. Walker. 2006. Public service performance: Perspectives on measurement and management. Cambridge University Press. New York. 319 pp.

Bryson, J.M. 2004. Strategic planning for public and non-profit organizations, 3rd Edition: A guide to strengthening and sustaining organizational achievement. Jossey-Bass, A Wiley Imprint. San Francisco. 430 pp.

Dale, R. 2003. The Logical Framework: An easy escape, a straitjacket, or a useful planning tool? Development in Practice, Vol. 13, Number 1: pp. 57-70. Oxfam GB, Carfax Publishing.

Dendi, A., H.J. Heile, S. Makambombu. 2009. Improving the livelihoods of the poor: Challenges and lessons from East Nusa Tenggara. In: Budy P. Resosudarmo and Frank Jotzo, Eds. (2009): Working with nature against poverty: Development, resources and the environment in Eastern Indonesia. ISEAS Publishing. Singapore. pp. 305-320.

Dendi, A., A. Zaini, M. Afifi, R. Saleh H. 2010. Perencanaan strategik partisipatif pengembangan ekonomi lokal dalam bingkai ekonomi kerakyatan. Pusat Penelitian Kependudukan dan Pembangunan Universitas Mataram. Forthcoming publication.

Diamond, J. 2005. Establishing a performance management framework for government. IMF Working Paper.

Dwiyanto, A., Partini, Ratminto, B. Wicaksono, W. Tamtiari, B. Kusumasari, M. Nuh. 2006. Reformasi birokrasi publik di Indonesia. Gadjah Mada University Press. Yogyakarta. 274 pp.

Francesco, M.D. 2001. Process not outcomes in New Public Management? Policy coherence in Australian Government. The Drawing Board: An Australian Review of Public Affairs, Volume 1, Number 3, March 2001: 103-116. Sydney.

Hatry, H.P. 2006. Performance measurement: Getting results. 2nd Edition. The Urban Institute Press. Washington, D.C. 326 pp.

Kelly, J.M., and W.C. Rivenbark. 2003. Performance budgeting for state and local government. M.E. Sharpe. Armonk, New York; London, England. 271 pp.

LGSP. 2009. Tantangan dalam penyelenggaraan evaluasi kinerja berbasis hasil (outcome-based) untuk Pemerintah Daerah di Indonesia. Local Governance Support Program (LGSP), USAID. www.lgsp.or.id.

Mahsun, M. 2006. Pengukuran kinerja sektor publik. Edisi I. BPFE-Yogyakarta. Yogyakarta. 249 pp.

McNamara, C. 2006. Field Guide to Consulting and Organizational Development: A collaborative and systems approach to performance, change and learning. Authenticity Consulting, LLC. Minneapolis, MN, USA. 499 pp.

MoHA. 2009. Laporan hasil evaluasi pemeringkatan kinerja penyelenggaraan pemerintahan propinsi, kabupaten dan kota berdasarkan LPPD Tahun 2007 tingkat nasional. Kementerian Dalam Negeri Republik Indonesia (MoHA - Ministry of Home Affairs of the Republic of Indonesia).

OECD. 2009. Governing regional development policy: The use of performance indicators. OECD Publishing. www.sourceoecd.org/governance/9789264056282.

Propper, C., and D. Wilson. 2003. The use and usefulness of performance measures in the public sector. CMPO Working Paper Series No. 03/073. www.bristol.ac.uk/cmpo/publications/papers/2003/wp73.pdf

Schmidt, T. 2009. Strategic project management made simple: Practical tools for leaders and teams. John Wiley & Sons, Inc. Hoboken, New Jersey. 251 pp.