
How advanced are Italian regions in terms of public eServices


The study aims to provide evidence on regional differences in the diffusion of ICT in the public sector in Italy, with a focus on different types of public e-services (eGovernment, eHealth, eEducation and Intelligent Transport Systems). Data are obtained by merging four surveys carried out by Between Co. (2010-11) and Istat, Italy's National Bureau of Statistics (2009). We pursue a three-fold objective. First, we attempt to overcome the prevailing attitude of considering the various domains of public e-service provision as separate from one another: measuring the progress of digital government requires a holistic view that captures the wide spectrum of public e-services in different domains (e.g. local and national administrative procedures, transportation, education) and the different aspects of service provision (not just e-readiness or web interactivity, but also multi-channel availability and take-up). Second, we tackle a major drawback of existing statistics and benchmarking studies of public e-services, which are largely based on counts of services provided online, by including more sophisticated indicators on both the quality of the services offered and back-office changes. Third, we develop a sound, open and transparent methodology for constructing a public eServices composite indicator based on the OECD/EC-JRC Handbook. This methodology, which incorporates expert opinion into a Data Envelopment Analysis, allows us to combine data on different e-service categories and on different aspects of their development, and enables us to rank Italian regions in terms of ICT adoption and public e-service development.



  1. 1st International EIBURS-TAIPS conference on "Innovation in the public sector and the development of e-services". How advanced are Italian regions in terms of public e-services? The construction of a composite indicator to analyze patterns of innovation in the public sector. Luigi Reggi, Davide Arduini, Marco Biagetti and Antonello Zanfei, EIBURS-TAIPS team, University of Urbino. April 19-20, 2012
  2. Aims and scope
     • Providing evidence on regional differences in the diffusion of public eServices in Italy, with a focus on:
       – different types of public eServices: beyond a monodimensional analysis based on e-gov diffusion
       – not only front- but also back-end issues
       – different channels for service delivery
     • Providing a sound, open and transparent methodology for constructing a public eServices composite indicator based on the OECD/EC-JRC Handbook
  3. Composite indicators (CI)
     A composite indicator is formed when individual indicators are compiled into a single index, on the basis of an underlying model of the multi-dimensional concept that is being measured (OECD Glossary of Statistical Terms).
     • Composite indicators are increasingly used by statistical offices, international organizations (e.g. OECD, EU, WEF, IMF) and academic researchers to convey information on the status of countries in fields such as the environment, economy, society or technological development: Cox et al., 1992; Cribari-Neto et al., 1999; Griliches, 1990; Huggins, 2003; Grupp and Mogee, 2004; Munda, 2005; Wilson and Jones, 2002; among others
     • The proliferation of these indicators is a clear symptom of their importance in policy-making, and of their operational relevance in macro- and microeconomics in general (Granger, 2001). Searching "composite indicator" in Google Scholar shows a 5x increase in 6 years (Saltelli, 2011)
  4. Pros and cons of CI
     Pros:
     • Can summarize complex or multi-dimensional issues in view of supporting decision-makers
     • Easier to interpret than trying to find a trend in many separate indicators
     • Facilitate the task of ranking countries on complex issues in a benchmarking exercise
     • Can assess progress of countries over time on complex issues
     • Reduce the size of a set of indicators or include more information within the existing size limit
     • Place issues of country performance and progress at the centre of the policy arena
     • Facilitate communication with the general public (i.e. citizens, media, etc.) and promote accountability
     Cons:
     • May send misleading policy messages if they are poorly constructed or misinterpreted
     • May invite simplistic policy conclusions
     • May be misused, e.g. to support a desired policy, if the construction process is not transparent and lacks sound statistical or conceptual principles
     • The selection of indicators and weights could be the target of political challenge
     • May disguise serious failings in some dimensions and increase the difficulty of identifying proper remedial action
     • May lead to inappropriate policies if dimensions of performance that are difficult to measure are ignored
     (Saisana and Tarantola, 2002; OECD, 2008)
  5. Selected CIs in the public e-services field (1/2) — for each: source, composite indicator, time coverage, units covered, sub-indicators, aggregation methodology
     • United Nations – e-Government Readiness Index – 2001-2010 – 191 Member States – web presence; telecommunication infrastructure; human capital – equal weighting
     • Brown University – e-government index – 2001-2007 – 198 Member States – availability of publications, databases and number of online services – equal weighting
     • European Commission / CapGemini – e-government index – 2001-2010 – 32 European countries – online sophistication of the 20 basic services (4-stage maturity model: information available online, one-way interaction, two-way interaction, transaction); full online availability of the 20 basic services – equal weighting
     • Torres et al. (2005) – Service Maturity Index – 2004 – 33 EU municipalities – Service Maturity Breadth (number of services offered through the Internet out of the 67 identified services); Service Maturity Depth (3-stage maturity model: simple information dissemination, one-way communication, service and financial transactions) – equal weighting
     • Kovačić (2005) – e-Government Readiness Index – 2003 – 95 Member States – based on United Nations data and methodology – equal weighting
     • Baldersheim et al. (2008) – Innovation score – 2004 – 75 Nordic municipalities – information features of the websites (contents of communication channels between citizens and town hall); communication features of the websites (extent of interactivity, i.e. how citizens can actually communicate via municipal sites) – equal weighting
     • Arduini et al. (2010) – Front Office Index – 2006 – 1,176 Italian municipalities – availability and level of interactiveness of 266 online services – Multiple Correspondence Analysis
  6. Selected CIs in the public e-services field (2/2)
     • eProcurement: European Commission – 2010 – 32 European countries – eProcurement availability for the pre-award phase (eNotification, eSubmission and eAwards services provided by eProcurement platforms in the public sector); eProcurement availability for the post-award phase (eOrdering, eInvoicing and ePayment services provided by eProcurement platforms in the public sector) – equal weighting
     • eHealth: European Commission – Joint Research Centre (Seville) – composite index of eHealth deployment – 2010 – 906 acute hospitals in the 27 European countries – infrastructure dimension; application and integration dimension; information flows dimension; security and privacy dimension – multivariate statistical analysis
     • eTransportation: Horan et al. (2007) – Advanced Traveler Information Systems Index – 2006 – 2 county metropolitan transportation authorities (Los Angeles and Minneapolis) – real-time network information; whether traffic or transit; traveler information such as route guidance or destination information – equal weighting
  7. CIs in the public e-services field
     Existing CIs in the public eServices field:
     • are specific to a single domain / type of eService
     • employ simple equal weighting as the standard aggregation method (with a few exceptions)
     • do not assess results with Uncertainty or Sensitivity Analysis (UA – SA)
     Critical remarks have been raised against the EC eGovernment benchmarking index. Criticism is mainly focused on the theoretical framework, the indicators chosen and the aggregation scheme adopted (Bannister, 2007; Bretschneider et al., 2005; Fariselli & Bojic, 2004; Goldkuhl & Persson, 2006; Jansen, 2005)
  8. What is new in our methodology for a public eServices CI
     1. Expanding the scope of the analysis of eServices diffusion
        – A holistic view to capture the wide spectrum of public e-services in different domains (in our case: eGov, eEducation, eTransportation) and the different aspects of service provision (e.g. technical and organizational change within PAs and new service implementation)
     2. Improving the quality of the framework
        – Using more sophisticated indicators on both the quality of services offered and back-office changes
        – Robustness check of the framework / classification of indicators
     3. Developing a sound, open and transparent methodology
        – Asking experts to assess the importance of basic indicators
        – Real benchmarking: measuring the distance from the efficiency frontier
        – Tracing back the contribution of the different aspects of eService diffusion (e.g. back- and front-end issues) to intermediate and final indices
        – Checking the robustness of results by reiterating the calculation of the CI with 12 other methods (Uncertainty Analysis)
  9. Public e-services diffusion: a broad definition
     • Aims: efficiency and effectiveness of public service (Fountain, 2001; Codagnone & Undheim, 2009); transparency (Wong & Welch, 2004; Meyer, 2009; Dawes, 2010); decision-/policy-making; participation (Noveck, 2008; Lampathaki et al., 2010)
     • Dimensions of ICT diffusion: service provision – front end; internal processes / interoperability / information integration – back end (Millard, 2004; Pardo and Tayi, 2007; OECD, 2007)
     • Providers: government – central/local agencies, public companies; third-party players – PPPs, apps development (Brito, 2009; Eaves, 2010); NGOs, citizens – self-help, collaboration (Noveck, 2008)
     • Channels: institutional websites, public websites; public kiosks; digital TV; mobile apps (Pieterson et al., 2008)
     • Data sources: government; citizens / NGOs / businesses: crowdsourcing (Osimo, 2008; Robinson et al., 2009; Chun et al., 2010)
     • Domains: eGovernment (the main focus of existing CIs / benchmarking exercises); eEducation, eTransportation and eHealth (the scope of our analysis); smart cities
  10. Public eServices CI – our framework
      • Existing theoretical frameworks are mainly focused on eGovernment and based on stage models implying linear progression (Lee, 2010), i.e. from stage 1 = input/eReadiness to stage n = outcome:
        – academic papers (Andersen & Henriksen, 2006; Hiller & Belanger, 2001; Layne & Lee, 2001; Moon, 2002; Siau & Long, 2005; Scott, 2001; West, 2004)
        – institutional reports (Center for Democracy & Technology, 2002; Grant & Chau, 2005; United Nations, 2001, 2003, 2005, 2008)
        – private consulting firms' reports (Accenture, 2003; Deloitte Research, 2000)
      • Most available frameworks can hardly be applied to the construction of our CI: "Too often composite indicators include both input and output measures. […] However, only the latter set of output indicators should be included if the index is intended to measure innovation performance" (OECD/EC-JRC Handbook on Constructing CIs, p. 6)
  11. Public eServices CI – our framework
      [Diagram: the Public eServices Composite Indicator is built from pillars (e-service domains), sub-pillars and basic indicators. Indicators shown include: mobility monitoring, intranet, certified e-mail systems, eProcurement, interoperability & integration, document workflow, school websites, travel planners, restricted areas, info on traffic and parking, fully interactive services, multi-channel delivery, interactive whiteboards, online payments, repositories of documents, wiki platforms, electronic displays on the street.]
  12. Data sources (domain – statistical units – source)
      • eEducation – 1,600 schools – Between. Survey "Service e-Platforms", 2010
      • eGovernment – 5,762 municipalities, 100 Provincial governments and 22 Regional governments – Italian Institute of Statistics. Survey "Information and Communication Technologies in Local Public Administrations", 2009
      • eTransportation – 117 local public transport companies – Between. Survey "Service e-Platforms", 2011
      Valle d'Aosta and Molise (0.7% of the total Italian population) were excluded from the analysis due to poor data quality in the eTransportation survey
  13. Basic indicators selection & robustness check of the framework
      • An initial set of 30 indicators was assigned to the "pillars" (e-service domains) and "sub-pillars" (aspects of innovation activity being considered)
      • 8 Principal Component Analyses and KMO tests were performed (one for each sub-pillar) to check the consistency of the framework
      • We applied the eigenvalue-one criterion [only one eigenvalue should exceed unity (Kaiser, 1960)] to make sure that the indicators in each sub-pillar share no more than one underlying dimension
      • 6 indicators that did not pass this test were discarded
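The eigenvalue-one check described above can be sketched as follows. This is a minimal illustration with synthetic data: the function name and the toy one-factor model are our own, and the KMO test is omitted.

```python
import numpy as np

def eigenvalue_one_check(X):
    """Kaiser's eigenvalue-one criterion for one sub-pillar.

    X : (observations x indicators) data for the indicators assigned to
    the sub-pillar. Returns the eigenvalues of the correlation matrix
    (descending) and whether exactly one exceeds unity, i.e. whether
    the indicators share a single underlying dimension.
    """
    corr = np.corrcoef(np.asarray(X, dtype=float), rowvar=False)
    eig = np.sort(np.linalg.eigvalsh(corr))[::-1]
    return eig, int((eig > 1.0).sum()) == 1

# toy example: three indicators driven by one latent factor plus noise,
# so the check should report a single dominant dimension
rng = np.random.default_rng(0)
f = rng.normal(size=200)
X = np.column_stack([f + 0.3 * rng.normal(size=200) for _ in range(3)])
eig, unidimensional = eigenvalue_one_check(X)
```

Since the trace of a correlation matrix equals the number of indicators, one eigenvalue well above 1 forces the others below it, which is what the criterion exploits.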
  14. Basic indicators selected, by pillar and sub-pillar
      eEducation
      • ICT in didactics: E1.1 Teachers using interactive whiteboard; E1.2 Schools extensively using online text and file/document collections; E1.3 Schools extensively using wiki platforms
      • Online services: E2.1 Schools with website; E2.2 Schools providing restricted-access areas for web-based info services to teachers; E2.3 Schools providing tools to share training aid files on the web (assignments, audio/video of lessons, etc.)
      • ICT and changes in internal organization: E3.1 School information system integrated with the National Educational Information System; E3.2 School information system integrated with the National Library System; E3.3 Schools with Intranet
      eGovernment
      • ICT and changes in internal organization: G1.1 Municipalities with certified e-mail; G1.2 Municipalities using e-procurement; G1.3 Municipalities using document workflow (full case handling)
      • Online services: G2.1 Municipalities providing fully interactive services on the web; G2.2 Municipalities allowing online payments; G2.3 Channels other than the web used to offer public services
      eTransportation
      • ICT during transportation: T1.1 No. of technological systems on board; T1.2 Cities providing information to travelers about traffic or parking by means of electronic displays; T1.3 Buses with on-board computer
      • ICT and changes in internal organization: T2.1 Cities with data interchange with other entities; T2.2 Cities with a managing authority for local mobility; T2.3 Cities with a mobility monitoring system
      • Online services: T3.1 No. of channels used to inform passengers; T3.2 Cities that provide information to travelers about traffic or parking on the web; T3.3 Cities that offer timetables with route planning (travel planner) on the web
  15. Steps for computing the CI
      • What is the relative importance of each basic indicator?
      • How should the basic indicators be aggregated to measure each region's level of development in eEducation, eGovernment and eTransportation?
      • How is the final score calculated?
      • How robust are the results we obtained?
  16. Gathering expert opinion through Budget Allocation (BA)
      What is BA? Experts are given a "budget" of N points, to be distributed over a number of individual indicators by "paying" more for those indicators whose importance they want to stress (Moldan and Billharz, 1997).
      Phases:
      1. Selection of experts for the evaluation: (a) randomly selected from the corresponding authors of the 751 top-journal articles reviewed by Arduini and Zanfei (2011) => 100 papers extracted; (b) also included 15 participants presenting papers on eServices diffusion at the 1st International EIBURS-TAIPS Conference
      2. Allocation of budget to indicators: an online questionnaire was administered; experts were asked to allocate a 100-point budget within each sub-pillar, so that the total number of indicators to evaluate is < 4 (Bottomley et al., 2000)
      3. Calculation of weights
  17. Results of BA
      [Chart: mean, max, min and median expert weight per basic indicator, 0-100 scale.]
      No expert consensus on the appropriate set of weights (mean coefficient of variation among indicators = 0.4426):
      – high variation / disagreement
      – no single pair of experts suggesting similar weights
      We must choose a statistical method to calculate weights, while trying not to waste the information provided by the experts
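The summary statistics behind this slide can be computed along these lines. A sketch only: `budget_stats` and the toy expert allocations are hypothetical, and the min/max rows anticipate the pie-share bounds used later (as shares, i.e. divided by the 100-point budget).

```python
import numpy as np

def budget_stats(allocations):
    """Per-indicator summary of expert budget allocations for one sub-pillar.

    allocations : (experts x indicators) array, each row summing to the
    100-point budget. The coefficient of variation (std/mean) gauges
    expert consensus; min and max are candidate L_i / U_i bounds.
    """
    a = np.asarray(allocations, dtype=float)
    return {
        "mean": a.mean(axis=0),
        "median": np.median(a, axis=0),
        "min": a.min(axis=0),
        "max": a.max(axis=0),
        # cv = 0 means the experts fully agree on that indicator
        "cv": a.std(axis=0) / a.mean(axis=0),
    }

# toy example: 4 experts allocating 100 points over 3 indicators
experts = [[50, 30, 20], [40, 40, 20], [60, 20, 20], [30, 50, 20]]
s = budget_stats(experts)
```

In the toy data the third indicator gets 20 points from every expert, so its coefficient of variation is zero, while the first two show the kind of disagreement the slide reports.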
  18. Combining the Benefit of the Doubt (BoD) approach with expert opinion
      • BoD is a method for data aggregation based on Data Envelopment Analysis (DEA) (Melyn & Moesen, 1991; Cherchye et al., 2007)
      • BoD advantages:
        – objective statistical/mathematical approach
        – it measures "efficiency", comparing a region's performance with a benchmark in a multi-dimensional space
        – the algorithm tends to use those indicators where the region shows better performance:
          • no other weighting scheme yields a higher composite indicator value (political acceptance)
          • reveals policy priorities / past choices
          • embeds concern for regional diversity
      • BoD + expert constraint (Cherchye et al., 2008): we impose that the use of each indicator is limited by expert opinion. The MIN (MAX) use of an indicator corresponds to the MIN (MAX) weight it has received from the experts
  19. Benefit of the Doubt (BoD) approach through Data Envelopment Analysis (DEA)
      Through DEA we estimate an efficiency frontier used as a benchmark to measure the relative performance of regions.
      The indicator is the ratio of the distance between the origin and the actual observed point to the distance between the origin and its projection on the frontier.
      In our case, the CIs of the 3 pillars measure the distance from an ideal case scoring 100% on all basic indicators.
      Source: rearranged from Mahlberg and Obersteiner (2001)
  20. Benefit of the Doubt (BoD) approach through Data Envelopment Analysis (DEA)
      Linear programming problem (j indicates the region, i the indicator, w_i the indicator weights):
        CI_j = max_w Σ_i w_i · y_{j,i}
        s.t.  Σ_i w_i · y_{k,i} ≤ 1 for every region k   (bounding constraint)
              w_i ≥ 0 for every indicator i              (non-negativity constraint)
      (Charnes et al., 1978)
  21. The "pie-share" constraint
      • Applying only the bounding and the non-negativity constraints may allow for extreme scenarios (Cherchye et al., 2008):
        – if a region's value on one single indicator dominates those of the other regions, that region will get the maximum score of 1 even if it has very low values on the other indicators
      • We therefore introduce a pie-share constraint that incorporates expert opinion (Wong and Beasley, 1990):
        L_i ≤ (w_i · y_{j,i}) / Σ_k (w_k · y_{j,k}) ≤ U_i
        where L_i = lower bound = MIN expert weight from BA, and U_i = upper bound = MAX expert weight from BA
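A minimal sketch of the constrained BoD program, using `scipy.optimize.linprog`. The function name and toy data are our own; the pie-share bounds are rewritten as linear inequalities in the weights (Wong-Beasley style), which is what makes the problem solvable as an LP.

```python
import numpy as np
from scipy.optimize import linprog

def bod_score(Y, j, L, U):
    """Benefit-of-the-Doubt score of region j via the DEA linear program.

    Y : (regions x indicators) matrix of basic indicators (higher = better).
    L, U : expert lower/upper bounds on each indicator's pie share.
    """
    n_regions, n_ind = Y.shape
    yj = Y[j]
    c = -yj  # linprog minimizes, so negate the objective sum_i w_i * y_ji
    rows, rhs = [], []
    # bounding constraint: no region may score above 1 under region j's weights
    for k in range(n_regions):
        rows.append(Y[k]); rhs.append(1.0)
    # pie-share constraints L_i <= w_i*y_ji / sum_k w_k*y_jk <= U_i,
    # rewritten as linear inequalities in w
    for i in range(n_ind):
        e = np.zeros(n_ind)
        e[i] = yj[i]
        rows.append(e - U[i] * yj); rhs.append(0.0)  # share <= U_i
        rows.append(L[i] * yj - e); rhs.append(0.0)  # share >= L_i
    res = linprog(c, A_ub=np.array(rows), b_ub=np.array(rhs),
                  bounds=[(0, None)] * n_ind, method="highs")
    return -res.fun

# toy example: 3 regions, 2 indicators, every pie share kept between 30% and 70%
Y = np.array([[0.9, 0.4],
              [0.5, 0.8],
              [0.6, 0.6]])
scores = [bod_score(Y, j, L=[0.3, 0.3], U=[0.7, 0.7]) for j in range(3)]
```

Each score lies in [0, 1]; frontier regions reach 1 under their own most favourable (but expert-bounded) weights, which is the "benefit of the doubt" reading of the LP.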
  22. Results
      • The following slides present the scores and ranks resulting from the constrained optimisation
      • The score:
        – represents a measure of a region's efficiency compared to the benchmark (the "ideal case")
        – is the sum of the pie shares of each indicator, which we have grouped together at the sub-pillar level (aspect of innovation activity being considered)
  23. [Bar charts: sub-pillar score decomposition by region. eEducation (scale 0-0.80): LOM, EMR, LAZ, VEN, TOS, CAL, BOZ, SAR, PMN, PUG, ABR, MAR, LIG, CAM, UMB, BAS, FVG, SIC, TRE. eGovernment (0-0.50): EMR, BOZ, TOS, VEN, LOM, MAR, FVG, UMB, PMN, PUG, SIC, CAM, LAZ, LIG, SAR, CAL, ABR, BAS, TRE. eTransportation (0-1.00): BOZ, EMR, TRE, LIG, FVG, TOS, MAR, UMB, CAM, VEN, LOM, CAL, PMN, BAS, SAR, ABR, LAZ, PUG, SIC. Legend: Online Services; ICT and changes in internal organization; ICT in didactics (eEdu) or during transportation (eTran).]
  24. Results per pillar (1/4): scores
      • The highest variation in scores is found in the eTransportation domain, while eEducation performances do not vary much
      • eGov results for Lombardy, Piedmont and the Province of Trento are lower than expected, probably due to their high proportion of very small municipalities
  25. Results per pillar (2/4): rankings
      • The 3 rankings differ substantially, revealing significantly different regional patterns:
        – very high variation in the rankings for the Province of Trento and Lazio; medium-high variation for Lombardy, Calabria and Campania
        – other regions show a more homogeneous approach to public eServices development, characterized by different trajectories of diffusion: high scores for EMR, TOS, BOZ; medium scores for VEN, MAR, PIE; low scores for SIC, BAS
  26. Results per pillar (3/4): pie shares
      Tracing back pillar results through "pie shares":
      • eEducation – pie shares are more or less fixed, i.e. all regions use the same "mix" of indicators to maximize their score under the expert constraint. This is due to quite similar relative values of each indicator and to the specific combination of bounds imposed by the experts
      • eTransportation – pie shares are flexible, so each region chooses its own set of weights, revealing the areas where investments have been made
      • eGovernment – an intermediate case
  27. Results per pillar (4/4): pie shares
      • Indicators related to ICT diffusion in internal processes and organizational changes play a major role in the final score of all public e-service categories (eEdu, eGov and eTra)
      • The importance of back-office reorganization through ICTs has emerged in the literature on the development of organizations, which has emphasized the essential role of the skills that characterize the different components of an organizational structure (Fountain and Osorio-Ursua, 2001; Fountain, 2003; West, 2005; Helfat et al., 2007)
  28. Final steps to the CI
      1. Normalization: MIN-MAX, where MAX is the region with the highest score
      2. Final aggregation through the geometric mean:
         – the marginal gain from an increase in a low score is much higher than from an increase in a high score
         – a region thus has more incentive to address the dimensions where it is weak
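The two steps above can be sketched as follows. This is an assumption-laden illustration: we read the slide's "MIN-MAX, where MAX is the region with the highest score" as rescaling each pillar by its maximum (i.e. MIN = 0, so that no region is forced to a zero that would annihilate the geometric mean); `final_ci` is a hypothetical name.

```python
import numpy as np

def final_ci(pillar_scores):
    """Final composite indicator from the three pillar (domain) scores.

    Step 1: rescale each pillar so the best region scores 1 (MAX-based
    rescaling; an assumption, see lead-in).
    Step 2: aggregate across domains with an unweighted geometric mean,
    so a weak pillar drags the CI down more than a strong one lifts it.
    """
    p = np.asarray(pillar_scores, dtype=float)
    rescaled = p / p.max(axis=0)
    return rescaled.prod(axis=1) ** (1.0 / rescaled.shape[1])

# toy example: 3 regions x 3 pillars; region 0 leads every pillar
P = np.array([[1.0, 2.0, 4.0],
              [0.5, 1.0, 2.0],
              [0.8, 1.6, 3.2]])
ci = final_ci(P)  # region 0 gets 1.0; balanced profiles keep their ratio
```

The geometric mean is what gives regions the incentive described on the slide: improving a pillar at 0.5 raises the CI more than the same absolute improvement of a pillar already near 1.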
  29. Final scores and rank
      Region  CI    Rank
      EMR     0.94   1
      BOZ     0.93   2
      TOS     0.80   3
      VEN     0.73   4
      FVG     0.70   5
      MAR     0.69   6
      LIG     0.68   7
      LOM     0.67   8
      UMB     0.65   9
      CAM     0.63  10
      PMN     0.59  11
      CAL     0.57  12
      TRE     0.53  13
      LAZ     0.52  14
      PUG     0.52  15
      SAR     0.51  16
      ABR     0.45  17
      BAS     0.45  18
      SIC     0.38  19
  30. Uncertainty Analysis (UA)
      • UA is a robustness assessment of a CI (Saltelli et al., 2008)
      • The uncertainties in the development of a composite indicator arise from some or all of the steps in the construction line (Saisana et al., 2004): (a) selection of sub-indicators, (b) data selection, (c) data editing, (d) data normalization, (e) weighting scheme, (f) weights values, (g) composite indicator formula, and (h) level of aggregation where the methodology applies
  31. 12 alternative scenarios (+ baseline)
      Scenario – weighting scheme – level where the method applies – aggregation – data normalization:
      • S1 (baseline): DEA pie shares (min-max BA) – domains – geometric mean – no rescaling
      • S2: BA mean weight + EW + EW – sub-pillars – additive – min-max
      • S3: BA median weight + EW + EW – sub-pillars – additive – min-max
      • S4: BA mean weight + EW + EW – sub-pillars – additive, geometric on domains – min-max
      • S5: BA median weight + EW + EW – sub-pillars – additive, geometric on domains – min-max
      • S6: DEA pie shares (min-max BA) – domains – additive – no rescaling
      • S7: EW – (none) – additive – min-max
      • S8: PCA + EW + EW – sub-pillars – additive on pillars and domains – min-max
      • S9: PCA + EW + EW – sub-pillars – additive on pillars, geometric on domains – min-max
      • S10: PCA + PCA + EW – sub-pillars + pillars – additive – min-max
      • S11: PCA + PCA + EW – sub-pillars + pillars – additive, geometric on domains – min-max
      • S12: PCA + PCA + PCA – sub-pillars + pillars + domains – additive – min-max
      • S13: PCA + PCA + PCA – sub-pillars + pillars + domains – additive, geometric on domains – min-max
  32. Results of UA
      [Chart: range of CI scores per region across the 13 scenarios, 0.00-1.00 scale. Per-region score ranges span from 0.167 (lowest uncertainty) to 0.367 (highest uncertainty).]
  33. Differences in rankings compared to the baseline scenario
      Regions  S2  S3  S4  S5  S6  S7  S8  S9 S10 S11 S12 S13  S-Borda  S-Condorcet
      PMN      -3  -3  -3  -3   0  -1  -2  -2   0   0   2   0       -1            0
      LOM       4   4   4   4   1   4   4   4   5   5   5   3        5            5
      BOZ       0   0   0   0   0   0   0   0   0   0   0   0        0            0
      TRE      -2   0  -3  -3   1  -3  -3  -3  -2  -3  -3  -5       -2           -2
      VEN      -1  -1  -1  -1   0  -1  -1  -1  -1  -1   0   1       -1           -1
      FVG      -1  -1  -1  -1   0  -1  -1  -1  -1  -1  -2  -2       -1           -2
      LIG      -3  -3  -3  -3   1  -2  -3  -4  -5  -5  -6  -6        0            1
      EMR       0   0   0   0   0   0   0   0   0   0   0   0        0            0
      TOS       0   0   0   0   0   0   0   0  -1  -1  -2  -1       -1           -1
      UMB       0   0   1   1   0   1   1   1  -1   1  -1   0        1            1
      MAR      -1  -1  -1  -1  -2  -1  -1  -1  -1  -1   0   0       -4           -6
      LAZ       6   6   5   5   0   4   5   5   6   4   6   6        5            4
      ABR       0   0   0   0   0   0   0   0   0   0   0   1        0            0
      CAM      -3  -5  -3  -3   0  -5  -1   0   1   1  -2  -2       -4           -4
      PUG       4   3   4   3   0   4   1   1   2   2   4   5        4            6
      BAS       0   0   0   0   0   0   0   0  -1  -1  -1  -1        0            0
      CAL       0   1   0   1  -1  -2   0   0  -2  -2  -2  -2       -1           -4
      SIC       0   0   0   0   0   0   0   0   1   1   1   2        0            0
      SAR       0   0   1   1   0   3   1   1   0   1   1   1        0            3
  34. Results of UA
      • CI final scores based on BoD weights are among the best possible results a region can obtain
      • Good robustness level, especially for top- and bottom-ranked regions: 13 regions out of 19 show only a 0/+1/-1 shift compared to the median rank
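Rank shifts like those compared across scenarios can be computed as follows. A sketch only: the function name, the toy scores and the sign convention (positive = the region moves down the ranking in the alternative scenario) are our own.

```python
import numpy as np

def rank_shifts(baseline_scores, scenario_scores):
    """Rank shift of each region in an alternative scenario vs. the baseline.

    Positive values mean the region ranks worse (moves down) under the
    alternative weighting/aggregation scheme.
    """
    def ranks(scores):
        # rank 1 = highest score
        order = np.argsort(-np.asarray(scores))
        r = np.empty_like(order)
        r[order] = np.arange(1, len(scores) + 1)
        return r
    return ranks(scenario_scores) - ranks(baseline_scores)

# toy example: four regions scored under the baseline and one alternative
baseline = [0.94, 0.80, 0.67, 0.52]
scenario = [0.90, 0.60, 0.70, 0.65]
shifts = rank_shifts(baseline, scenario)  # -> [0, 2, -1, -1]
```

Applying this region by region over all 13 scenarios, and summarizing with Borda or Condorcet aggregation, yields a table of the kind shown above.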
  35. Conclusions 1/2
      From a methodological point of view:
      – the BoD approach combined with BA is an effective way to incorporate both regional choices and expert judgment into a CI
      – geometric aggregation gives higher scores to regions showing a more balanced eServices diffusion across the 3 domains
      – uncertainty analysis on the rankings shows high robustness levels for top- and bottom-ranked regions
  36. Conclusions 2/2
      Main findings and implications from our analysis:
      – the ranking reflects the hierarchy of regions in terms of per capita income and industrial development: the current development of public eServices does not seem to correct imbalances between lagging regions and frontrunners
      – high heterogeneity in the mix of e-service proficiency: a need for regional differentiation of e-service promotion policies
      – there is more cross-regional variation in eEducation and eTransportation than in eGov: human capital formation and mobility enhancement are bound to be the real distinctive assets of regions