SOFTWARE MEASUREMENT
       EUROPEAN FORUM




          PROCEEDINGS

          28 - 29 May 2009

NH LEONARDO da VINCI, ROME (ITALY)




                 EDITOR
               Ton Dekkers
         Galorath International Ltd.

              COVER PHOTO
               Ton Dekkers


CONFERENCE OFFICERS

Software Measurement European Forum 2009

Conference Manager
Cristina Ferrarotti, Istituto Internazionale di Ricerca Srl., Italy

Conference Chairperson
Roberto Meli, DPO - Data Processing Organization, Italy

Program Committee Chairperson
Ton Dekkers, Galorath International Ltd., The Netherlands

Program Committee
Silvia Mara Abrahão, Universidad Politecnica de Valencia, Spain
Prof. Alain Abran, École de Technologie Supérieure / Université du Québec, Canada
Dr. Klaas van den Berg, University of Twente, The Netherlands
Dr. Luigi Buglione, ETS - Université du Québec / Nexen (Gruppo Engineering), Italy
Manfred Bundschuh, DASMA e.V, Germany
Thomas M. Cagley Jr., David Consulting Group Inc, U.S.A.
Prof. Gerardo Canfora, University of Sannio, Italy
Carol Dekkers, Quality Plus Technologies, Inc, U.S.A.
Prof. Dr. Reiner Dumke, University of Magdeburg, Germany
Dr. Christof Ebert, Vector Consulting, Germany
Cao Ji, China Software Process Union, People's Republic of China
Dr. Thomas Fehlmann, Euro Project Office AG, Switzerland
Pekka Forselius, 4SUM Partners, Finland
Dan Galorath, Galorath Inc, U.S.A.
Harold van Heeringen, Sogeti Nederland B.V., The Netherlands
Rob Kusters, Eindhoven University of Technology / Open University, The Netherlands
Andy Langridge, riskHive Ltd, U.K.
Dr. Nicoletta Lucchetti, SOGEI, Italy
Sandro Morasca, Università dell’Insubria, Italy
Pam Morris, Total Metrics, Australia
Dr. Jürgen Münch, Fraunhofer IESE, Germany
Marie O’Neill, COSMIC, Ireland
Serge Oligny, Bell Canada, Canada
Habib Sedehi, University of Rome, Italy
Charles Symons, United Kingdom
Frank Vogelezang, Sogeti Nederland B.V., The Netherlands
Pradeep Waychal, Patni Computer Systems Ltd, India
Hefei Zhang, Samsung, People's Republic of China








    The Software Measurement European Forum 2009 is the sixth edition of the successful
event for the software measurement community, held previously in Rome in 2004, 2005, 2006
and 2007. In 2008 SMEF was located in Milan, one of the most important European business
cities. Now we are back in Rome.
    The two Italian market and competence leaders, Istituto Internazionale di Ricerca
(http://www.iir-italy.it) and Data Processing Organization (http://www.dpo.it), which
managed the previous successful editions, continued their collaboration with Ton Dekkers
(Galorath International) on the Program Management.
    SMEF aspires to remain a leading worldwide event for exchanging experience and
improving knowledge on a subject that is critical for ICT governance.

   The Software Measurement European Forum will provide an opportunity for the
publication of the latest research, industrial experiences, case studies, tutorials and best
practices on software measurement and estimation. This includes measurement education,
methods, tools, processes, standards and the practical application of measures for
management, risk management, contractual and improvement purposes. Practitioners from
different countries will share their experience and discuss current problems and solutions
related to ICT governance based on software metrics.

  The focus chosen for the 2009 event is:
  “Making software measurement be actually used”

   Although software measurement is a young discipline, it has evolved quickly and many
problems have been adequately solved in conceptual frameworks. Available methods,
standards and tools are certainly mature enough to be used effectively to manage the software
production processes (development, maintenance and service provision). Unfortunately, only
a few organisations, compared to the potentially interested target group, have brought the
measurement process from theory to practice. We therefore asked for contributions related to
implementation, in order to analyse the causes of, and possible solutions to, the problem of
measurement as a “useful but not used” capability. Papers were welcome that provide
guidance on establishing and actually using a roadmap to cover the distance from theory to
practice.






   With this in mind, the Program Committee asked for papers focusing on the following
areas:
    • Focus Area: Making software measurement be actually used.
    • How to integrate Software Measurement into Software Production Processes.
    • Measurement-driven decision making processes.
    • Data reporting and dashboards.
    • Software measurement and Balanced Scorecard approaches.
    • Service measurement methods (SLA and KPI).
    • Software measurement in contractual agreements.
    • Software measurement and standard process models (ITIL, ASL, ...).
    • Software measurement and Software Maturity Models (CMM-I, SPICE, Six Sigma,
       ...).
    • Software measurement in a SOA environment.
    • Functional size measurement methods (IFPUG and COSMIC tracks).
    • (Software) Measurement to control risks.
    • Software Reuse metrics.

   It is quite ambitious to cover all of this, but we are confident that the presentations at the
conference and the papers in the proceedings will cover most of the issues. The angles chosen
vary from theory to practice and from academic to pragmatic. This interesting mix of
viewpoints will offer you a number of potential (quick) wins to take back.


  Cristina Ferrarotti
  Roberto Meli
  Ton Dekkers

  May 2009






TABLE OF CONTENTS




DAY ONE - MAY 28

     KEYNOTE
     How to bring “software measurement” from research labs to the operational
     business processes
     Roberto Meli

     An empirical study in a global organisation on software measurement in Software
1    Maturity Model-based software process improvement
     Rob Kusters, Jos Trienekens, Jana Samalikova

     Practical viewpoints for improving software measurement utilisation
11   Jari Soini

     IFPUG function points or COSMIC function points?
25   Gianfranco Lanza






DAY TWO - MAY 29


      Personality and analogy-based project estimation
37    Martin Shepperd, Carolyn Mair, Miriam Martincova, Mark Stephens

      Effective project portfolio balance by forecasting project’s expected effort and its
49    break down according to competence spread over expected project duration
      Gaetano Lombardi, Vania Toccalini

      Defect Density Prediction with Six Sigma
59    Thomas Fehlmann

      Using metrics to evaluate user interfaces automatically
67    Izzat Alsmadi, Muhammad AlKaabi

      Estimating Web Application Development Effort Employing COSMIC: A
77    Comparison between the use of a Cross-Company and a Single-Company Dataset
      Filomena Ferrucci, Carmine Gravino, Sergio Di Martino, Luigi Buglione

      From performance measurement to project estimating using COSMIC functional
91    sizing
      Cigdem Gencel, Charles Symons

      A ‘middle-out’ approach to Balanced Scorecard (BSC) design and implementation
105   for Service Management: Case Study in Off-shore IT-Service Delivery
      Srinivasa-Desikan Raghavan, Monika Sethi, Dayal Sunder Singh, Subhash Jogia

      FP in RAI: the implementation of software evaluation process
117   Marina Fiore, Anna Perrone, Monica Persello, Giorgio Poggioli

      KEYNOTE
129   Implementing a Metrics Program: MOUSE will help you
      Ton Dekkers

APPENDIX

        Program / Abstracts
143

        Author’s affiliations
159






     Results of an empirical study on measurement in Maturity
            Model-based software process improvement
                        Rob Kusters, Jos Trienekens, Jana Samalikova


Abstract
   This paper reports on a survey amongst software groups in a multinational organisation.
The paper presents and discusses metrics that are being recognised in the software groups.
Its goal is to provide useful information to organisations with or without a software
measurement program. An organisation without a measurement program may use this paper
to determine what metrics can be used on the different CMM levels, and possibly prevent the
organisation from using metrics that are not yet effective. An organisation with a
measurement program may use this paper to compare its metrics with the metrics identified
in this paper and, where appropriate, add or delete metrics accordingly.


1. Introduction
   Although both CMM and its successor model CMMI address the measurement of process
improvement activities on their different levels, they do not prescribe how software
measurement has to be developed, or what kind of metrics should be used. In accordance
with experience from practice, software measurement requires well-defined metrics, stable
data collection, and structured analysis and reporting processes [10], [4]. The use of
measurement in improving the management of software development is promoted by many
experts and in the literature. For example, the SEI provides sets of software measures that can
be used on the different levels of the Capability Maturity Model, see [1]. In [7] an approach is
presented for the development of metrics plans, which also gives an overview of the types of
metrics (i.e. project, process, product) that can be used on the different levels of maturity. To
investigate the status and quality of software metrics in the different software groups at
Philips, its Software Process Improvement (SPI) Steering Committee decided to perform an
empirical research project in the global organisation.

   The goal of this paper is to identify, on the basis of empirical research data, the actual use
of metrics in the software development groups of the Philips organisation. The data have
been collected in the various software groups of this organisation. The main research
questions are respectively:
    • What types of metrics are recommended in literature regarding their usage on the
       different CMM-levels?
    • What types of metrics are used in software groups?
    • What types of metrics are used in the software groups on the different CMM levels?
    • How do the empirical research findings on metrics usage contribute to literature?

   In Section 2 of this paper relevant literature on the subject is addressed. Section 3 presents
the approach that has been followed to carry out the survey. Some demographics of the
survey are presented in section 4. Section 5 addresses the results of the survey and section 6
presents the conclusions.






2. Usage of metrics versus the levels of the CMM: a literature overview
   As the maturity level of a software development organisation increases, additional
development issues should be addressed, e.g. project management, engineering, support, and
process management issues, see e.g. [1]. To monitor and control these issues, it would seem
reasonable that these additional issues would prompt the usage of additional metrics. Based
on this consideration a literature review on the relationship between CMM levels attained and
the type of metrics used has been carried out. Surprisingly, only a few papers were found that
deal with this issue. We will discuss them in the remainder of this section.

   In [8] the straightforward position is taken that any software development organisation
requires five core metrics: time, effort, size, reliability and process productivity. The authors
do not address in their book a possible connection between the number and type of metrics
and the CMM levels; however, they state that these five core metrics should form a sufficient
basis for control. From a CMM perspective ‘control’ is a level 2 issue, and one could deduce
that in essence the five metrics are needed for monitoring and control on the so-called
Managed level 2. However, organisations on the Initial level 1 will also at least aspire towards
control, and these five metrics will be useful on that level as well. Other authors do
differentiate between metric usage and the CMM level attained. We will discuss a number of
publications in the following. We keep in mind the logical assumption that when a particular
type of metric is advised for a certain level, it is obviously also useful at the higher CMM
levels.

   The Software Engineering Institute [1] provides guidelines for metric usage related to the
maturity level attained. For level 1 no guidelines are provided. Towards level 2, project
metrics (progress, effort, and cost) and simple project stability metrics are advocated. At level
2, recording of planned and actual results on progress, effort and cost is required. Towards
level 3 ranges are added for the project metrics, and at level 4 so-called ‘control limits’
complete the total project control picture. At level 2 product quality metrics are also
introduced; at levels 3 and 4 further metrics are added to these, similarly to the project
metrics just described, to provide the option to go from recording or registration of product
quality aspects towards explanation and control. Further, starting from level 2 computer
resource metrics are advocated, and starting from level 3 training metrics. Surprisingly, these
computer resource and training metrics are not mentioned in any of the other publications
that we investigated. At level 5, finally, metrics are added that should indicate the degree of
process change on a number of issues, together with the actual and planned costs and benefits
of this process change.

   In [7] a direct line is drawn between the CMM level attained and the metrics that should
be used. Metrics are to be implemented step by step in five levels, corresponding to the
maturity level of the development process. For level 1 some elementary project metrics (such
as product size and staff effort) are advised, in order to provide a baseline rate of e.g.
productivity. These baseline metrics should provide a basis for comparison as improvements
are made and maturity increases. Logically deducing metrics requirements from the
characteristics of each level, they indicated further that level 2 requires project related
metrics (e.g. software size, personnel effort, requirements volatility), level 3 requires product
related metrics (e.g. quality and product complexity), and level 4 so-called process wide
metrics (e.g. amount of reuse, defect identification, configuration management, etc.). No
recommendations are provided for level 5.





   In [11] some advice is also provided, and in essence these authors follow the SEI
approach. However, they adopt the notion of baselining advocated in [7]: they advocate using
baseline metrics at level 1, in the form of what they refer to as post-shipment metrics (e.g.
defects, size, cost, schedule). At level 2 they, like [7], aim at project control, but, like [1], they
also include product quality (e.g. defect) information. For levels 3 and 4 they provide similar
advice to [1]. However, at level 4 they add baseline metrics for process change, to support
with baseline data the process change activities and associated metrics at level 5, e.g. on the
transfer of new technology.

   Some differences between these approaches can be identified:
    • In [7] the usage of baseline metrics for level 1 is advised; in [1] this is refrained from,
       but [11] takes up this point.
    • In [7] product quality metrics are discussed starting from level 3, while in [1] this is
       started at level 2.
    • In [7] process wide metrics are introduced at level 4, while in [1] the authors wait until
       level 5. In [11] the same position as in [1] is taken, but baseline metrics for process
       change are added at level 4.

   Based on these papers, we can deduce the following framework of assumptions for the
usage of metrics on the different CMM levels of maturity (also sketched in code after this
list):
   • Level-1 organisations will focus on baseline metrics.
   • Level-2 organisations will add to this project related metrics and, according to [11]
       and [1], also product quality metrics. However the usage of the latter type of metrics
       on this level is not in accordance with [7].
   • Level-3 will certainly contain product metrics, both on quality and complexity.
   • Level-4 is less clear. According to [7] 'process wide' metrics will be included. In [11]
       this is agreed with, but only at a baseline level.
   • Level-5 will definitely require ‘process wide’ metrics.
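
   Purely as an illustration (this encoding is ours, not taken from the cited sources), the
framework above can be written down as a small lookup of the metric types expected to be
introduced at each level, applying the earlier assumption that a type advised for one level
remains useful at all higher levels:

      # Hypothetical encoding of the framework of assumptions based on [1], [7] and [11].
      EXPECTED_NEW_METRIC_TYPES = {
          1: ["baseline"],
          2: ["project", "product quality"],   # product quality per [1] and [11], not per [7]
          3: ["product quality", "product complexity"],
          4: ["process wide (baseline level only, per [11])"],
          5: ["process wide"],
      }

      def expected_metric_types(level):
          # Everything introduced at or below the given level is assumed to stay useful.
          return sorted({t for lv, types in EXPECTED_NEW_METRIC_TYPES.items()
                         if lv <= level
                         for t in types})

      print(expected_metric_types(3))
      # ['baseline', 'product complexity', 'product quality', 'project']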

   Summarising, it is clear that we can expect the number and diversity of metrics to increase
with the maturity level.

   From the literature some survey-based evidence is available. In [5] it is shown that
elementary metrics, such as size, cost, effort and lead time, are used at the lower maturity
levels and that they precede the usage of more complex metrics, such as those on product
quality and review effectiveness at the higher maturity levels. Similar results were found in
[2]. In [6] metric usage in high maturity organisations is examined. Metrics regarding size,
effort, schedule, defects and risk were uniformly found. With the exception of ‘risk’, these
metrics correspond directly to the core metrics identified in [8] and the baseline metrics
suggested by [11]. Surprisingly, although metrics were used for process analysis and
improvement, no specific process improvement oriented metrics are mentioned in [6].

   In this paper we will use survey data to compare these with our framework of
assumptions.






3. The survey
    The survey was carried out in the software groups of Philips. In these software groups,
software is developed for a large product portfolio, ranging from shavers to TVs to X-ray
equipment. Software size and complexity are increasing rapidly and the total software staff is
growing continuously, from fewer than 2,000 in the early 1990s to more than 5,000 now.
More and more software projects require more than 100 staff-years and are spread over
multiple development sites. The software groups also have to deal with the fact that the share
of software developed by third parties is rapidly increasing.

   Software Process Improvement (SPI) in Philips started in 1992. CMM is used for software
process improvement (SPI), because CMM is well defined and the measurement of the
process improvements is done by objective assessors, supported by jointly developed CMM
interpretation guidelines [9]. Although Philips has switched to the successor model CMM
Integration (CMMI), the data discussed in this paper refer to the original CMM. Within
Philips, the software capabilities and achievements are measured at corporate level by
registering the achieved CMM levels of the company’s software groups. As a result the
measurements are comparable across the organisation and the software process capability
(CMM) level of each individual software group is known. Philips developed a cross-
organisational infrastructure for SPI. From the organisation perspective SPI is organised top-
down as follows: Core Development Council, SPI Steering Committee, SPI Coordinators and
Software Groups. From the communication perspective the following events and techniques
can be recognised: a formal Philips SPI Policy, a bi-annual Philips Software
Conference, temporary Philips SPI Workshops, a Philips SPI Award, an internal SPI Web
Site, and so-called SPI Team Room sessions.

   The research group used brainstorming sessions and discussions to develop a number of
questions regarding metrics usage. We asked: "Which of the following quality
measurements/metrics do you currently use?". For this, a list of 17 predefined metrics was
provided (with a yes/no scale) that had been derived from earlier research sponsored by the
SPI Steering Committee of the global Philips organisation. To this list software groups could
add their own metrics. A drawback of this approach was that no direct link to previous
surveys (e.g. as reported in [2]) could be established. We felt that this drawback was
compensated for by the usage of terms that were familiar to the participants.

4. Survey demographics
   This section briefly gives some survey demographics. The survey was aimed at group
management and was completed by them. The number of responding software groups was 49
out of 74, a very satisfactory response rate of about 66%. Table 1 shows the distribution of
the responding software groups over the continents. Table 2 presents the CMM levels that
have been achieved in the organisation.

          Table 1: Distribution of responding software groups over the continents.
                      Continent            Number of software groups
                      Europe                          32
                      Asia                              8
                      America                           9
                      Total                           49





               Table 2: Number of software groups on the different CMM-levels.
              CMM- level        Number of groups      Average # of metrics reported
                   1                  20                           6.2
                   2                  13                           6.3
                   3                  10                          11.5
                   4                  0                            n.a.
                   5                   3                          18.0
              Not reported             3
                 Total                49

   Table 2 shows that more than 50% of the software groups that completed the survey have
succeeded in leaving the lowest level 1 (initial software development) of the CMM. It is also
clear that the number of metrics reported increases substantially starting from level 3.

5. Results of the survey
   This section introduces and discusses the results of the survey. Apart from reacting to the
predefined metrics, most respondents added (often significant numbers of) self-described
metrics. In order to be able to manage these results, the data were classified. The
classification used is derived from existing classifications such as those presented in [1] and
[7].
   The two-level categorisation used is presented in table 3. The results of the survey in these
terms are presented in table 4.

                                  Table 3: Categories of metrics.
  Category                Sub-categories
  Project                 Progress, Effort, Cost
  Product                 Quality, Stability, Size, Complexity
  Process                 Quality, Stability
  Process improvement     Reuse, Improvement
  Computer utilisation    n.a.
  Training                n.a.

                              Table 4: Frequencies per metric class.
   Categories                 Freq.    Percent Subcategories         Freq.          Percent
   project                    179         45.4 progress                80             20.3
                                                 effort                97             24.6
                                                 cost                   2              0.5
   product                    190         48.2 quality                137             34.8
                                                 stability             15              3.8
                                                 size                  38              9.6
   process                       8         2.0 quality                  4              1.0
                                                 stability              4              1.0
   process improvement          13         3.3 reuse                    8              2.0
                                                 improvement            5              1.3
   computer                      0         0.0 utilisation              0              0.0
   Training                      1         0.3 training                 1              0.3
   Other                         3         0.8 other                    3              0.8
   Total                      394        100.0 total                  394            100.0
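
   As a minimal illustration (the script below is ours; the category names and frequencies are
copied from tables 3 and 4), the two-level categorisation and the percentage column of table 4
can be reproduced as follows:

    # Hypothetical sketch: the categorisation of table 3 and the frequencies of table 4
    # (including the 3 metrics classified as 'other').
    SUBCATEGORIES = {
        "project": ["progress", "effort", "cost"],
        "product": ["quality", "stability", "size", "complexity"],
        "process": ["quality", "stability"],
        "process improvement": ["reuse", "improvement"],
        "computer": ["utilisation"],
        "training": ["training"],
        "other": ["other"],
    }

    FREQUENCIES = {"project": 179, "product": 190, "process": 8,
                   "process improvement": 13, "computer": 0,
                   "training": 1, "other": 3}

    total = sum(FREQUENCIES.values())            # 394 metrics in total
    for category, freq in FREQUENCIES.items():
        print(f"{category:20s} {freq:4d}  {100 * freq / total:5.1f}%")
    # Reproduces the percentage column of table 4: 45.4, 48.2, 2.0, 3.3, 0.0, 0.3, 0.8.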






   Three metrics were reported that fell outside this categorisation. These were ‘employee
satisfaction’, ‘patents approved’ and ‘PMI’. Computer, with no metrics reported, and
Training, with only a single metric, score extremely low. These can be discounted from the
further analysis. The large majority of the other 391 metrics fall within the classes ‘project’
and ‘product’, which together account for 93.6%. Process and process improvement related
metrics score low, with only 5.3% of metrics.

   According to [8], all organisations should at least collect the five core metrics time, effort,
reliability, size, and productivity. Given that productivity can be seen as a combination of
size and effort, table 5 shows the results on this requirement. An organisation scored on a
category if at least one metric was present in that category.

    Table 5: Organisations (number and percentage) collecting data on the five core metrics.
CMM Project           Project        Product       Product       Productivity    All core
level progress        effort         quality       size                          metrics
  1     14 (82%)      14 (82%)       13 (76%)      12 (71%)      11 (65%)        8 (47%)
  2     11 (85%)      13 (100%)      9 (69%)       7 (54%)       7 (54%)         5 (38%)
  3     10 (100%)     10 (100%)      10 (100%)      9 (90%)      9 (90%)         9 (90%)
  5     3 (100%)      3 (100%)       3 (100%)      3 (100%)      3 (100%)        3 (100%)

   All level 5 and nearly all level 3 organisations fulfil this requirement. However, fewer than
half of the level 1 and level 2 organisations do so. The main missing metric seems to be
product size. However, size can always be measured retroactively, and sufficient automated
support is available for this. Taking this into account, all level 3 and 5 organisations, most
(69%) of the level 2 organisations, and about half (53%) of the level 1 organisations fulfil this
requirement. Apparently, starting from level 2, this requirement is largely met, providing
these organisations with sufficient (at least according to [8]) information for control.
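
   To make this scoring rule explicit, here is a minimal sketch (the data structure and function
names are ours, not taken from the survey instrument): an organisation covers the five core
metrics of [8] when it scores on each of the five columns of table 5, and the relaxed variant
drops product size because size can be measured retroactively.

    # Hypothetical sketch of the scoring behind table 5; 'reported' is the set of metric
    # subcategories an organisation indicated in the survey.
    CORE_COLUMNS = ["project progress", "project effort",
                    "product quality", "product size", "productivity"]

    def covers_all_core_metrics(reported):
        # An organisation scores on a column if it reports at least one such metric.
        return all(column in reported for column in CORE_COLUMNS)

    def covers_core_metrics_relaxed(reported):
        # Variant used above: product size can be measured retroactively,
        # so it is not required up front.
        return all(column in reported
                   for column in CORE_COLUMNS if column != "product size")

    example = {"project progress", "project effort", "product quality", "productivity"}
    print(covers_all_core_metrics(example))       # False: no size metric reported
    print(covers_core_metrics_relaxed(example))   # True
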
   A next interesting step is to compare the types of metrics collected with the requirements
formulated in the literature survey presented above, i.e. [1], [11] and [7]. According to this
we would expect (given that we have no data for level 4):
    • Level-1 organisations will focus on baseline metrics.
    • Level-2 organisations will add to this project related metrics and, according to [11]
        and [1], also product quality metrics.
    • Level-3 will contain product metrics, both on quality and complexity.
    • Level-5 will also require process wide metrics.

   Table 6 contains numbers of metrics found per sub-category and per CMM-level, and can
be used to check these expectations.





                Table 6: Number of metrics per sub-category per CMM-level.
               Metric subcategory                      CMM level
                                           1       2       3      5   Total
               project progress           20      24     27       6    77
               project effort             28      27     23      13    91
               project cost                0       1       1      0     2
               product quality            43      17     47      18   125
               product stability           5       2       7      0    14
               product size               14       8       9      3    34
               process quality             0       0       0      4     4
               process stability           3       1       0      0     4
               process reuse               3       3       2      0     8
               process improvement         1       0       1      3     5
               Total                     117      83    117      47   364

    It is clear that level-1 organisations do not adhere to the expectations as formulated in the
literature. We see a widespread usage of all types of metrics. If we combine this with the less
than stellar adherence to the principle of five core metrics, as formulated in [8], we can
conclude that level-1 organisations’ usage of metrics is in line with their CMM level: it is
‘ad hoc’ or even ‘chaotic’. The lack of project control at this level apparently spills over into
the lack of consideration of which metrics to use.

   At level 2 we see a more concentrated usage of metrics. As expected, many metrics focus
on project matters. We also see a significant number of product metrics. This fits in with the
requirements as provided in [1] and at first glance contradicts those made in [7]. However,
the CMM model is a growth model. It is to be expected that organisations at level 2 are
involved in a transition process towards level 3. Usage of product metrics can therefore also
be seen in this light, supporting the assumptions made in [7]. Given the relatively low number
of product metrics encountered (just over one per organisation), this explanation seems the
more likely. Some scattered process metrics are found, but in such low numbers (fewer than
0.25 metrics of this type per organisation) that we feel we can say that, on average, level 2
organisations select metrics in a way that is appropriate for their position in the CMM model.
This is supported by the high level of usage of the five core metrics as advocated in [8].

   Looking at level 3 organisations, a similar picture can be seen. We see extensive usage of
project metrics. As can be expected for level 3, usage of product metrics is also extensive.
This is exactly in line with what would be expected at this level. However, somewhat more
surprisingly, almost no process metrics are reported. If we take the line of reasoning
mentioned before, that level x organisations will be on the road to level x+1 and will
therefore report metrics that fit in with this higher level, a somewhat higher number would be
expected. This can mean that the participating organisations have just reached level 3 and
have not yet started the next step, or that level 3 is considered a sufficient target and no
further progress is envisaged. The data show that the average level-3 organisation has had an
SPI program in place for 6 years. All process metrics that were encountered were found in
organisations that had had their SPI program in place for 8 years on average; the remainder
had had their program in place for 5 years on average. This would seem to suggest that the
first explanation is the most likely one. All level 3 organisations gather the core metrics as
suggested in [8]. Altogether this suggests that level 3 organisations, as do level 2
organisations, follow a well-considered approach towards metric usage that fits in with their
position in the CMM maturity model.

   No data are available for level 4 organisations. In [7] usage of reuse metrics at this level is
indicated. In the survey, reuse metrics were included as part of the 17 pre-defined metrics.
Eight organisations indicated that they used this type of metric: three of these were level 1,
three were level 2 and two were level 3. None were level 5, as would have been expected
according to [7]. Apparently usage of reuse metrics is not a characteristic of level 4
organisations.

   Only limited data are available for level 5 organisations. However, these data suggest a
similar result to that obtained at level 3. Metrics usage almost doubles for level 5
organisations, in line with the results reported in [6]. And although only a few process
metrics are reported, two out of the three organisations do use them. Furthermore, this limited
number of organisations accounts for one third (7) of all (21) process related metrics. On
average each level 5 organisation uses 2.3 process related metrics, as opposed to 0.3 process
related metrics per organisation for the other organisations. And, as is suggested in [6],
considerate usage of project and product metrics can provide insight into the development of
the process. Finally, all level 5 organisations collect data for the five core metrics. All this
again suggests a carefully considered usage of metrics by these organisations.

   In absolute numbers the amount of process related metrics (21) is small. Only 5.3% of
metrics fall into this category. A higher number would have been expected, especially for the
organisations at levels 3 and 5. It is true that level 5 organisations do show a higher number:
on average each level 5 organisation uses 2.3 process related metrics, as opposed to 0.3 for
the other organisations. This definitely shows that this type of metric is used more at higher
levels of maturity, but it is still not a very high usage.
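
   These per-organisation figures follow directly from table 6; a small worked check (the
denominator of 43 for the non-level-5 groups is our reading of table 2, i.e. the 46 groups with
a reported CMM level minus the three level 5 groups):

    # Process-related metrics per CMM level, taken from table 6 (process quality +
    # process stability + process reuse + process improvement).
    process_related = {1: 0 + 3 + 3 + 1, 2: 0 + 1 + 3 + 0, 3: 0 + 0 + 2 + 1, 5: 4 + 0 + 0 + 3}

    total = sum(process_related.values())               # 21 process-related metrics
    level5_share = process_related[5] / total           # 7 / 21, i.e. one third
    per_level5_org = process_related[5] / 3             # 2.33, reported as 2.3
    per_other_org = (total - process_related[5]) / 43   # assumed denominator, about 0.33

    print(total, round(level5_share, 2), round(per_level5_org, 1), round(per_other_org, 1))
    # 21 0.33 2.3 0.3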

6. Conclusions
   The main results of the analysis are as follows:
    • On each of the CMM levels, software groups make use of quite a large number of
       project metrics (i.e. to measure progress, effort and cost).
    • Software groups on CMM level 1 (the Initial process level) make, or try to make, use
       of a large diversity of metrics, also from the categories product and process. This
       contradicts the recommendations from the literature, which stress the usage of project
       metrics (mainly to provide baseline data as a start for further improvement).
    • On CMM level 2 (the Managed process level) software groups definitely restrict
       themselves to project metrics; the usage of product metrics shows a sharp decrease.
       Obviously software groups on this level know better what they are doing, or what
       they should do, i.e. project monitoring and controlling. As a stepping stone towards
       level 3, a number of product quality metrics can also be found here.
    • Software groups on CMM level 3 (the Defined level) clearly show a strong increase
       in product quality measurement. Obviously, this higher level of maturity enables
       software groups to effectively apply product metrics in order to quantitatively express
       product quality.
    • It is remarkable that the analysis of the metrics showed that software groups on CMM
       level 3 and above do not show a strong increase in the usage of process related
       metrics. The main instruments in software measurement to reach a satisfactory
       maturity level seem to be the project control and product quality metrics.




   The research shows a clear emphasis on project progress and product quality metrics in
software groups that are striving for a higher level of maturity. It is also interesting to see that
on each of the first three CMM levels software groups make use of about the same, high,
number of project metrics, so project control is important on all levels. Surprisingly, and in
contradiction with the literature, process metrics are hardly used (even by software groups on
the higher CMM levels).

7. References
[1]    Baumert J.H., M.S. McWhinney, Software Measures and the Capability Maturity Model, Software
       Engineering Institute, Technical Report CMU/SEI-92-TR-25, Carnegie Mellon University, Pittsburgh,
       1992
[2]    Brodman, J.G. and Johnson, D.L., Return on investment from software process improvement as
       measured by US industry, Crosstalk, April 1996.
[3]    Ebert C., Dumke R., Software Measurement, Establish, Extract, Evaluate, Execute, Springer-Verlag
       Berlin Heidelberg, 2007.
[4]    El-Emam, K., Goldenson, D., McCurley, J., Herbsleb, J., Modeling the Likelihood of Software Process
       Improvement: An Exploratory Study, Empirical Software Engineering, Vol. 6, No. 3, pp. 207-229, 2001.
[5]    Gopal, A, Krishnan, M.S., Mukhopadadhyay, T, and Goldenson, D., Measurement programs in software
       development: determinants of success, IEEE Transactions on Software Engineering, vol. 28, no. 9. Pp.
       863-875, 2002.
[6]    Jalote, P., Use of metrics in high maturity organisations, Software Quality Professional, Vol. 4, No. 2,
       March 2002, pp. 7-13.
[7]    Pfleeger, S. L. and McGowan, C., Software Metrics in the Process Maturity Framework. Elsevier
       Science Publishing Co., Inc., 1990.
[8]    Putnam L.H., Myers, W., Five Core Metrics: The Intelligence Behind Successful Software Management,
       Dorset House Publishing, 2003
[9]    SEI: Software Engineering Institute, Carnegie Mellon University, June 1995. The Capability Maturity
       Model: Guidelines for Improving the Software Process, Addison Wesley Professional, Pages: 464,
       Boston, USA.
[10]   Trienekens J.J.M., 2004. Towards a Model for Managing Success Factors in Software Process
       Improvement. Proceedings of the 1st International Workshop on Software Audit and Metrics (SAM)
       during ICEIS 2004, Porto. Portugal, pp. 12-21.
[11]   Walrad, C., Moss, E., Measurement: the key to application development quality, IBM Systems Journal,
       Vol. 32, No. 3, pp. 445-460, 1993.








     Practical viewpoints for improving software measurement
                             utilisation
                                           Jari Soini


Abstract
   An increasing need for measurement data to support decision-making at all levels of the
organisation has been observed when implementing the software process, but on the other
hand measurement of the software process is utilised in practice to a rather limited extent in
software companies. There obviously seems to be a gap between need and use on this issue.
In this paper we present the viewpoints of Finnish software companies on measurement
issues and their experiences. Based on the empirical information obtained through interview
sessions, this paper discusses the justification for measurement as well as potential reasons
why measurement utilisation in the software process is challenging. The key factors are also
discussed that, at least from the users’ perspective, should be carefully taken into account in
connection with measurement, together with some observed points in current measurement
practices that should be improved. These results can be used to evaluate and focus on the
factors that must be noted when trying to determine the reasons why measurement is only
utilised to a limited extent in software production processes in practice. This information
could be useful in trying to solve the difficulties observed in relation to measurement
utilisation and thus in advancing measurement usage in the context of software engineering.


1. Introduction
   In the literature there are a lot of theories and guides available for establishing and
implementing measurement in a software process. However, in practice difficulties have been
observed in implementing measurement in the software development process [1],[2] and
there is also evidence that measurement is not a particularly common routine in software
engineering [3],[4]. These difficulties are partly due to the nature of software engineering as
well as people’s attitude to measurement overall, but there may also be a lack of knowledge
related to measurement practices, measurement objects, and also the metrics themselves.
Very often difficulties arise when trying to focus the measurement and also when trying to
use and exploit the measurement results. In many cases it is unclear what should be measured
and how and also by whom the measurement data obtained should be interpreted [5].
   This paper presents the viewpoints of software organisations – the advantages and
disadvantages – related to setting up or using software measurement in the software
engineering process. Based on empirical information obtained through interviews, this paper
discusses the potential issues challenging measurement utilisation in the software process.
The aim of this study is to report empirical experience of which factors are felt to be important
in measurement implementation. The empirical research material used in this examination
was obtained from a software research project [6] carried out between 2005 and 2007 in
Finland. The aim of the two-year-long research project was to investigate the current status
and experiences of measuring the software engineering process in Finnish software
organisations and to enhance the empirical knowledge on this theme. The overall goal of the
research was to advance and expand understanding and knowledge of measurement usage,
utilisation, and aspects for its development in the software process. In this paper, we focused
on the interviews performed during the research project and the interpretation of the results
from these interviews.



   This empirical information gives us some hints as to which are the important issues when
designing, establishing, and implementing measurement in the context of software
engineering.
   During the past years, many research projects and empirical studies including experiences
of measurement utilisation in software engineering have been carried out. Research closely
related to this subject in the software measurement field has been performed
[7],[8],[9],[10],[11],[12],[13]. In these studies the measurement practices and experiences in
software organisations were examined empirically. The main outcome of many of the studies
has been practical guidelines for managing measurement implementation and usage in
software engineering work. However, this issue still seems to require further examination.
   The structure of this paper is as follows: Section 2 describes the research context. In
section 3, the scope and method used in the empirical study are described. This is followed
by section 4, in which the interview results are presented and, in section 5, a discussion and
evaluation of the key findings of the study are made. Finally, in section 6, a summary of the
paper is presented with a few relevant themes for further research.

2. Research context
   In qualitative research there is a need for a theory or theories against which the captured
data can be considered. Below is a short overview of the purpose of use and utilisation of
measurement in business overall and, in particular, in the context of software engineering. It
is well known that measurement can be utilised in various ways for many different purposes
in business. In general, measurement data provides numerical information, which is often
essential for comparing different alternatives and making a reasoned selection among the
alternatives. McGarry et al. [14] listed typical reasons for carrying out measurement in
business:
    • Without measurement, operations and decisions can be based only on subjective
        estimation.
    • To understand the organisation’s own operations better.
    • To enable evaluation of the organisation’s own operations.
    • To be better able to predict what will happen.
    • To guide processes.
    • To find justification for improving operations.

   From the software engineering perspective, a few fundamentals can be pointed out that are
considered the key benefits when discussing the utilisation of measurement. In practice, the
main purpose of software engineering measurement is to control and monitor the state of the
process and the product quality, and also to provide predictability. Therefore, more software
engineering-related reasons for measurement are: (e.g. [4],[15],[16])
    • To control the processes of software production.
    • To provide a visible means for the management to monitor their performance level.
    • To help managers and developers to monitor the effects of activities and changes in
       all aspects of development.
    • To allow the definition of a baseline for understanding the nature and impact of
       proposed changes.
    • To indicate the quality of the software product (to highlight quality problems).
    • To provide feedback to drive improvement efforts.






   There is no doubt that measurement is widely recognised as an essential part of
understanding, predicting and evaluating software development and maintenance projects
[17],[18],[19]. However, in the context of software engineering, its utilisation is
acknowledged to be a challenging issue [1],[2],[4],[19]. Although information relating to the software engineering
process is available for measurement during software development work and, in addition,
there are plenty of potential measurement objects and metrics related to this process, the
difficulties related to measurement utilisation in this context are considerable. Due to the
factors described before, it must be assumed that the basic issue with measurement utilisation
is not the lack of measurement data, so in all likelihood there must be some other reasons for
this phenomenon. The following section presents one example of how this issue has been
studied empirically and also the type of answers that were obtained to explain this
phenomenon.

3. Empirical study – Software Measurement project
   This study is based on the results of the research project that was carried out between 2005
and 2007 in Finland. The starting point of the SoMe (Software Measurement) project [6] was
to approach software quality from the software process point of view. The approach selected
for examining the recognised issues in relation to software quality was an empirical study, in
which the target was to examine measurement practices related to the software process in
Finnish software companies. The aim and scope of the project was to examine the current
metrics and measurement practices for measuring software processes and product quality in
software organisations. The findings presented in this paper are based on the information
captured with personal interviews and a questionnaire format during the research project.

3.1. Scope of the research project
   There were several limitations related to the research population and unit when
implementing the study. Firstly, the sample was not a random set of Finnish software
companies. The SoMe project was initiated by the Finnish Software Measurement
Association (FiSMA) [20] and two Finnish universities, the Tampere University of
Technology (TUT) [21], and the University of Joensuu (UJ) [22]. In this study the population
was limited so that it covers software companies who are members of FiSMA (which consists
of nearly 40 Finnish software companies, among its members are many of the most notable
software companies in Finland). Secondly, inside the selected population (FiSMA members)
too, the sample was not randomly selected. Instead, purposive sampling was used in this
study, which means that the choice of case companies was based on the rationale that new
knowledge related to the research phenomenon would be obtained (e.g.[23],[24]). This
definition was based on the objective of the SoMe research project, which was primarily to
generate a measurement knowledge base consisting of a large metrics database including
empirical metrics data [6]. In this case we approached companies that had previous
experience with software measurement. The most promising companies, i.e. those performing
software engineering measurement activities, were carefully identified in close co-operation
with FiSMA. All the participant companies volunteered and the number of the set was limited to
ten companies, based on the selected research method (qualitative research), project schedule
and research resources. The third limitation relates to the research unit. The target group
selected within the company was made up of persons who had experience, knowledge, and
understanding of measurement and those responsible for measurement operations and the
metrics used in practice. From the research point of view, the research unit was precisely
these people in the company.





  These activities described above generally come under the job description of quality
managers, which was the most common position among the interviewees (see [25]).

3.2. Participants of the research project
   The companies involved in the study were pre-selected by virtue of their history in
software measurement and voluntary participation. The people interviewed were mainly
quality managers, heads of departments and systems analysts, i.e. persons who were
responsible for the measurement activities in the organisation. A total of ten companies
participated in this study. Three of them operate in the finance industry, two in software
engineering, two in ITC services, two in manufacturing and one in automation systems. Six
of the companies operate in Finland (Na) only, the other four also multi-nationally (Mn).
Table 1 below shows some general information related to the participant companies. The
common characteristic shared by these companies is that they all carry out software
development independently and they mainly supply IT projects.

                          Table 1: Description of participating companies.
Company      Business of the              SW                   SW            National or Employees Employees SPICE
                company                Business            Application      Multinational  total      SE      Level
    A     Finance industry, ICT   Customer            Business Systems (BS)      Na         220       220       3
    B     Finance industry, ICT   Customer            Business Systems           Na         280       150       2
    C     Finance industry, ICT   Customer            Business Systems           Na         450        35       2
    D     Finance industry, ICT   Customer            Business Systems           Na         195       195       3
    E     SW engineering, ICT     Customer, Product   BS, Package                Mn       15000      5000       2
    F     SW engineering, ITC     Customer            Business Systems           Na         200        60       2
    G     ITC services            Customer            Business Systems           Na         200       200       3
    H     Manufacturing           Customer, Product   Emb Syst, Package          Mn        1200        30       1
    I     Manufacturing           Product             Emb Syst, Real-Time        Mn       24000       200       2
    J     Automation industry     Product             Real-Time                  Mn        3200       120       3


   Based on the captured information, measurement practices and targets varied slightly
between the companies. Most of them had used measurement with systematic data collection
and results utilisation for about 10 years, but there were also a couple of companies who had
only recently started to adopt more systematic measurement programs. The contact people
inside the companies were mainly quality managers, heads of departments and systems
analysts, i.e. persons who were responsible for the measurement activities in the organisation.

3.3. Research method used
   The nature of the method selected in the SoMe research project was strongly empirical and
the research approach selected was a qualitative one based on personal interviews, combined
with a questionnaire. This method was used in order to collect the experiences of the
companies about the individual metrics used and also opinions and viewpoints about
measurement, based on their own experiences. To be exact, in these interview sessions we
used a semi-structured theme interview base as a framework. We used the same templates in
all interview sessions: one form to collect general information about the company and its
measurement practices, and another spreadsheet-style form to collect all metrics the company
uses or has used (see [25]). The use of the interview method can be justified by the research
findings made by Brown and Duguid [26]. They posit that there is a parallel between
knowledge sharing behaviour and other social interaction situations. Huysman and de Wit
[27] and Swan et al. [28] have also come to similar conclusions in their research. Therefore, it
was decided to use a research method focusing on personal interviews at regular intervals
with participants instead of purely a written questionnaire. It is distinctive of qualitative
research that space is often given to the viewpoints and experiences of the interviewees and
their attitudes, feelings, and motives related to the phenomenon in question [29].
   These are difficult, or even impossible, to clarify with quantitative research methods.
the interviews we wanted particularly to examine the individual experiences and feelings
related to the research theme. Using this approach, the aim is not to generalise. Instead, with
face-to-face interviews, our aim is to get the views of the persons about the focal factors and
issues related to measurement implementation. With these interview sessions we tried to
identify the users’ positive and less positive experiences, noted critical issues, and also
improvement and development issues as to the current situation regarding measurement.

4. Interview results
   The following section highlights the interview results from the stance in which we are
particularly interested in this study. This paper presents the respondents’ opinions about the
four particular themes included in our interview base. The captured information taken from
the interviews was carefully analysed by the research team and the results are presented
below in a condensed form. The opinions are divided into four theme categories as follows:

  Positive experiences of measurement usage:
   • Real status determined through measurement.
   • Measurement makes matters transparent.
   • Scheduling and cost adherence become notably better.
   • Resource allocation can be clarified.
   • Key issues emphasised using measurement.
   • Trends visible in time series.
   • Supports decision-making.
   • Motivates employees to work in a better and more disciplined way.
   • Agreed metrics help when planning company goals and operations.
   • Process improvement achieved through measurement.
   • Tool for monitoring effects of SPI actions.
   • Supports quality system implementation.
   • Practical means for integrating a company which has grown through acquisitions and
      creating a coherent organisation culture.
   • Good way of observing trends and benchmarking.
   • Can inspire confidence in new customers by presenting measurement data.

  Negative experiences of measurement usage:
  • Resistance to change at the beginning.
  • People in other departments do not see the benefit of measurement.
  • Measurement often suffers from a negative image (monitoring).
  • Easily viewed as extra work.
  • Mainly viewed negatively if the metrics are linked to an employee bonus system.
  • Measurement may underline and guide certain operations too much.
  • Benefits are quite difficult to see in the short term and also on a personal level.
  • Management sets metrics without in-depth knowledge of the trends.
  • Concrete utilisation of the metrics is difficult.
  • Interpreting the measurement results may be difficult.
  • When trying to use without having necessary measurement skills.
  • Measurement without utilising its results is a waste.



  Observed issues for improvement and development:
  • Measurement results should be made available to a wider audience - wider
     publication and transparency.
  • Insufficient analysis and utilisation of measurement results.
  • Measurement utilisation in operative management.
  • Developing and implementing the measurement process.
  • Assessing realistic measurement goals is challenging.
  • Determining how to ensure a particular metric is utilised.
  • Extent of measurement (do we measure enough and the right factors?).
  • Finding and defining good metrics.
  • Lack of forward-looking real-time metrics (need for real-time measurement).
  • Lack of measurement of remaining work effort.
  • Need to improve the measurement related to the area of change and requirement
     management.
  • Evolving the utilisation of limit areas.
  • Delivery of top-quality measurement.
  • Improving the utilisation of testing data: the exact causes and “originating phase” of
     defects should be measured.
  • Increasing the reliability of the collected measurement data.
  • Improving performance measurement.

  Critical issues in utilising and implementing measurement:
  • Demand for management commitment and discipline of the whole organisation.
  • There should be a versatile and balanced set of metrics.
  • Enough time must be reserved to analyse measurement results and also to make the
       necessary rework.
  • Feedback on measurements must be given to all the data collectors, to sustain
       motivation.
  • Benefits of the measurement must be indicated in some way, if not, motivation will
       fade.
  • Capturing measurement information must be integrated in normal daily work.
  • Measurement must be automated as much as possible, so that data capture is not too
       time-consuming.
  • Trends are important, as it takes time to notice process improvement.
  • There should be a large number of metrics, which combine to give an accurate status.
  • Avoid placing too much weight and over-emphasis on one single measurement result.
  • Establishing of reliable metrics and their implementation is time-consuming.

   These were the summarised experiences of the themes discussed in this study. As a result
of the information captured on the theme in the interviews, a few key factors have arisen. It
seems that the interviewees rated measurement issues that relate closely to the human aspect
as extremely important factors in the measurement context (e.g. utilisation and publishing,
purpose of use, measurement benefits, feedback, response, etc.). Other issues that were
emphasised relate to the skills and knowledge required for using the metrics and
measurement (e.g. interpreting, analysing and utilising the collected measurement data,
measurement focus, etc.) and also difficulties in metrics selection (amount, balance,
measurement objects, etc.).




   As is well known, measurement itself is a process that requires sufficient resources and
know-how of software development work and the developed software itself [30],[31]. In the
following sections the observations related to the human aspect are examined more closely.
These interview results are discussed based on the current measurement practices in the
participant organisations and also in relation to previous research results.

5. Discussion
   This section deals more extensively with one of the key factors that arose repeatedly
during the interview sessions – the human aspect. This observation supports the widely
accepted concept that there is a strong emphasis on the human nature of software engineering
work [32],[33],[34],[35],[36]. Next, the interview results are compared and discussed based
on the information that is derived from current measurement practices in the participant
organisations.

5.1. Prevalent measurement usage from the human perspective
   In one part of the SoMe project, detailed information was collected on the current metrics
used in the participant companies. The methods of collecting, analysing and classifying the
research material as well as the entire research process have been described in previous
research papers [6],[37]. Briefly, after the captured measurement data was organised and
evaluated, and the necessary eliminations were made, 85 individual metrics were left which
fulfilled the definition of the research scope. The following evaluation related to results from
the empirical study deals with the measurement users’ perspective. This examination includes
the results that enabled us to see who the beneficiaries of the measurement results were in
practice. For this evaluation, the metrics were classified according to three user groups:
software/project engineers; project managers; and upper management (management above
project level including, but not restricted to, quality, business, department and company
managers). In each group the metrics were further divided into two segments: metrics that are
primarily meant for the members of that group, and those that are secondarily used by that
group after some other group has had access to them (according to the order given on the
questionnaire form, primary or secondary users of the metric may be from more than one user
group). Figure 1 (below) presents the users of the measurement data, based on their position
within the organisational hierarchy.

   [Bar chart: Number of Metrics (y-axis, 0–90) per Measurement Data User Group (x-axis),
   split into primary and secondary users. Software engineers: 10 primary, 16 secondary;
   Project managers: 22 primary, 16 secondary; Upper management: 66 primary, 17 secondary.]

 Figure 1: Distribution of measurement results and their users in the company organisation.






   With the captured metric data, the users of the measurement data and their position within
the organisational hierarchy can be recognised (see Figure 1). It is clear that the majority of
measurement data produced benefits for upper management. 98% (83/85) of all the data
produced by the 85 metrics goes to upper management, 78% (66/85) of it primarily. Project
managers are in the middle with 38 metrics, of which 58% (22/38) are primarily intended for
them. Considerably fewer metrics are targeted at the engineering/development level. Software
engineers benefit from only 26 metrics, and of those only ten are primarily intended for them.
Moreover, according to the study results, roughly 75% (66/85) of the data collecting
activities were carried out primarily by developers or project managers. As Figure 1 clearly
shows, the captured measurement data focuses strongly on use by a certain user group. In this
case, there is obviously an imbalance in who receives the results and in how the metrics are
selected for communication in the organisation. Moreover, this phenomenon is confirmed by
previous studies: metrics designated for upper management are fairly common in software
engineering organisations (e.g. [14],[38],[39]). Next, this result and its possible effects are
evaluated from the measurement perspective.
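
   As an illustrative check of the percentages quoted above, the short Python sketch below
recomputes them from the primary/secondary counts shown in Figure 1 (the counts are those
reported in this section; the script and its variable names are ours and are not part of the
original study material):

    # Metric counts per user group, split into primary and secondary recipients,
    # as reported for the 85 metrics of this study (see Figure 1).
    counts = {
        "software engineers": {"primary": 10, "secondary": 16},
        "project managers":   {"primary": 22, "secondary": 16},
        "upper management":   {"primary": 66, "secondary": 17},
    }
    TOTAL_METRICS = 85

    for group, split in counts.items():
        reached = split["primary"] + split["secondary"]
        print(f"{group}: receives {reached}/{TOTAL_METRICS} metrics "
              f"({reached / TOTAL_METRICS:.0%}); primarily intended for it: "
              f"{split['primary']} ({split['primary'] / reached:.0%} of those received, "
              f"{split['primary'] / TOTAL_METRICS:.0%} of all metrics)")

Running it reproduces the figures used above: 83/85 (98%) and 66/85 (78%) for upper
management, 22/38 (58%) primary for project managers, and 26 metrics with only ten primary
for software engineers.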

5.2. The human factor – critical for measurement success
   The next section highlights the focal - human-related - issues of which one should be
aware when implementing measurement in the context of software engineering, based on the
empirical results obtained. After analysing and evaluating the captured qualitative data, the
following key factors seem to be significant enablers from the measurement implementation
viewpoint.

Interpreters and analysers of measurement data
   Regarding measurement data analysis, it must be recognised that the data collected has no
intrinsic value per se, but only as the starting point of an analysis enabling the evaluation of
the state of the software process. The final function of the measurement process is precisely
to analyse the collected measurement data and draw the necessary conclusions. According to
Dybå [8], measurement is meaningless without interpretation and judgment by those who
make the decisions and take actions based on them. The human aspect relates closely to this
issue of measurement data processing. According to Florac and Carleton [40], whatever level
is being measured, problems will occur if the analysers of the measurement data are
completely different people from those who collected the measurement data. Van Veenendaal
and McMullan [41], as well as Hall & Fenton [42], share this view and state that when
collecting or analysing measurement data, it is fundamental that the participants are those
who have the best understanding of the measurement focus in question. Being aware of this is
helpful when establishing and planning new software process measurement activities,
especially from the perspective of analysing the measurement results. Based on the results of
the study, we can state that the under-representation of engineer-level beneficiaries and the
uneven distribution of measurement information very probably have detrimental effects: if
not for the projects or software engineering, then at least for the validity and reliability of the
measurements and their results. This result (Figure 1) supports the need that arose during the
interviews for improving the analysis of measurement data.

Measurement data feedback
   Feedback including utilisation of the collected measurement data for all organisation
levels is paramount. As Hall and Fenton [42] identified, the usefulness of measurement data
should be obvious to all practitioners.





   Therefore, it is important that the measurement results are, where appropriate, open
and available to everybody at all levels of the organisation. It should be noted that
appropriate measurement supports managerial and engineering processes at all levels of the
organisation [43],[44]. It is therefore also important that measurement and metric systems are
designed by software developers for learning, rather than by management for controlling. As
well as serving managers, measurement provides opportunities for developers to participate in
analysing, interpreting, and learning from the results of measurements and to identify
concrete areas for improvement. The most effective use of measurement data for
organisational learning, and also process improvement, is to feed the data back in some form
to the members of the organisation. According to the results of this study, roughly 75%
(66/85) of the data collecting activities were carried out primarily by developers or project
managers. Research would indicate that they expect feedback on how the collected
measurement data is utilised [13]. In the interviews, the feedback issue did not come up. This
may be due to the fact that all those interviewed belonged to upper management, which
benefited from 98% of the collected measurement data. However, there is evidence that this
feedback issue seems to be problematic. The lack of feedback and thus evidence of the
usefulness of collected measurement data has been widely observed (e.g. [13],[42],[45],[46]).
The literature contains many cautionary examples of measurement programs failing because
of insufficient communication, as the personnel is unable to see the relevance or benefit in
collecting data for the metrics [13],[16],[47]. People should be made aware that when
collecting measurement data, it is also essential to organise and produce instructions for
steering the feedback channel, and also to give feedback to everyone or at least to those who
had gathered or supplied the data. Success in giving feedback is indeed recognised as an
important human-related factor in sustaining motivation for measurement [8],[46].

Reaction to the measurement results
   There is yet another important human-related issue to highlight concerning measurement
utilisation: the reaction to the measurement results. This is one of the most crucial issues
from the motivation perspective in relation to measurement data collecting work. One of the
significant factors for success in measurement is the motivation of the employees to carry out
measurement activities. One of the most challenging assignments is to implement
measurement in a manner that has the most positive impact on each of the projects within the
organisation [14]. If people can see that the measurement data they have collected is first
analysed and then, if necessary, leads to corrective actions, the motivation for collecting the
measurement data will be sustained. However, as with the previous issues, there is also
evidence of a lack of management in this respect. According to the results of a large study [7]
made by the Software Engineering Institute at Carnegie Mellon University (software
practitioners, 84 countries, 1895 responses) only 40% of all respondents reported that
corrective action is taken when a measurement threshold has been exceeded and close to 20%
of respondents reported that corrective action is rarely or never taken when a measurement
threshold is exceeded. Furthermore, the studies carried out by Solingen and Berghout [19],
Hall et al. [13] and Mendonca and Basili [48] support this observation. In these interviews
(see section 4.1), the observations that came up closely relate to this issue. These included the
fact that the concrete utilisation of the metrics and also interpretation of the measurement
results are often considered difficult, that measurement without utilising its results is a waste,
and that there is a need for wider publication and transparency of the measurement results,
etc. These are examples of issues that can be linked to a lack of reaction to measurement
results and thus decreased motivation to carry out measurement.





   The factors presented above tend to be considered essential from the human perspective in
the context of measurement in software engineering. The issues described - the beneficiaries,
analysis and also the appropriate organisation of the collected measurement data - must be
taken into consideration in connection with measurement. Previous studies support these
observations, derived from interpretation of the data captured during the qualitative study.
This confirms the reliability of the study results. In conclusion, it could be stated that paying
attention to these issues may enhance measurement usage in software engineering work.

5.3. Evaluation of the study results
   In qualitative research, such as the interviews in this case, subjective opinion is allowed
and subjectivity is unavoidable [49]. With qualitative material, generalisations cannot be
made in the same way as in quantitative studies. In this way it should be noted that
generalisations can only be made on the grounds of interpretation, not directly on the grounds
of the research material [29]. Therefore, the most challenging part of this approach from the
researcher’s point of view is data interpretation and the subsequent selection of the key
observations and themes that came up in the research material. Finally, it is very much a
question of the researchers’ interpretation which of the results that arose they consider to be
the most significant. If one looks for possible weaknesses or bias in qualitative research
regarding reliability, it could be said to lie in the phase of analysing and interpreting the
research material. However, the results of the previous studies support the observations
presented here.
   On the other hand, because the starting point of the study was the SoMe project, which
was designed to capture experience-based measurement data, it could be said that the sample
does not correspond well to the population from the aspect of software engineering
measurement activity. Therefore it must be assumed that the participant companies were
more measurement-oriented than the average population. This, together with them being
members of FiSMA, may create a bias in results, as these companies are more likely to
perform measurement than the Finnish software industry in general. From the external
validity point of view this means that the results cannot be generalised to represent normal
practice in the population. However, the results of this study can be applied when examining
the current measurement practices and metrics used in a similar population and a similar
research focus.
   The statistical examination relating to the study result was omitted, because of the
qualitative research method used and also due to the limited amount of research data. A
statistical generalisation of the results cannot be made with the collected data set (e.g. [50]).
As stated by Briand et al. [51], sampling too small a dataset leads to a lack of
statistical authority. However, a more detailed examination of the validity and reliability of
the research carried out and of its results is given in the doctoral thesis [25], of which
the research material presented here is the focal data.

6. Conclusion
   This article dealt with possible reasons for the observation that software process
measurement is, in practice, utilised to a rather limited extent in software companies. The empirical
data used in this study was based on a software research project on the Finnish software
industry’s measurement practices. With the help of empirical information obtained through
interview sessions, this paper presented the viewpoints on measurement issues of a set of
Finnish software companies and their experiences. Based on the study results, this paper
mapped out and discussed the potential reasons for the challenges facing measurement
utilisation in the software process.



   This paper focused on the human factor, which was strongly emphasised in the interviews.
Based on the empirical results obtained, a set of human-related issues arose which one should
be aware of and also take into account when implementing measurement in the context of
software engineering. The results showed that the users felt these issues to be the most crucial
when implementing measurement. The observations made in this study pointed out where
measurement needs to be focused and improved. The interview results also exposed some
points that should be improved in current measurement practices. These results can be used
to evaluate and focus on the factors that must be noted when trying to determine the reasons
why measurement is only utilised to a limited extent in software production processes in
practice.
   In conclusion, the empirical information presented in this paper can help when
establishing or improving measurement practices in software organisations. The information
provided by the empirical study enables us to recognise where the critical issues lie in
measurement implementation. This may help us to take a short step on the road toward
extended utilisation of measurement in software engineering. The results also showed some
of the potential directions for further research on this theme. One interesting issue for future
research would be to examine what really happens after the measurement data is analysed
and how the response process is organised in practice in these organisations.

7. References
[1]    Bititci, U. S., Martinez, V., Albores, P. and Parung, J., “Creating and managing value in collaborative
       networks”, International Journal of Physical Distribution and Logistics Management, vol. 40, no. 3,
       2004, pp. 251-268.
[2]    Testa, M. R., “A Model for organisation-based 360 degree leadership assessment”, Leadership and
       Organisation Development Journal, vol. 23, no. 5, 2002, pp. 260-268.
[3]    Ebert, C., Dumke, R., Bundschuh, M. and Schmietendorf, A., “Best Practices in Software Measurement:
       How to Use Metrics to Improve Project and Process Performance”, Springer, 2004.
[4]    Fenton, N. and Pfleeger, S., “Software Metrics: A Rigorous & Practical Approach - 2nd edition”,
       International Thompson Computer Press, 1997.
[5]    Kulik, P. (2000), “Software Metrics State of the Art 2000”, KLCI Inc., 2006, http://www.klci.com.
[6]    Soini, J., “Approach to Knowledge Transfer in Software Measurement“, in International Journal of
       Computing and Informatics: Informatica, vol. 31, no. 4, Dec 2007, pp. 437-446.
[7]    Kasunic, M., “The State of Software Measurement Practice: Results of 2006 Survey”, Technical Report
       CMU/SEI-2006-TR-009, ESC-TR-2006-009, Software Engineering Institute (SEI), 2006.
[8]    Dybå, T., “An Empirical Investigation of the Key Factors for Success in Software Process
       Improvement”, IEEE Transactions on Software Engineering, vol. 31, no. 5, 2005, pp. 410-424.
[9]    Card, D. N. and Jones, C. L., “Status Report: Practical Software Measurement”, in the 3rd International
       Conference on Quality Software (QSIC’03) proceedings, pp. 315-320, 2003.
[10]   Baddoo, N. and Hall, T., “De-motivators for software process improvement: an analysis of practitioners’
       views”, The Journal of Systems and Software, vol. 66, no. 1, 2003, pp. 23-33.
[11]   Rainer, A. and Hall, T., “A quantitative and qualitative analysis of factors affecting software processes”,
       Journal of Systems and Software, vol. 66, no. 1, 2003, pp. 7-21.
[12]   Dybå, T., “An Instrument for Measuring the Key Factors of Success in Software Process Improvement”,
       Empirical Software Engineering, vol. 5, no. 4, 2000, pp. 357-390.
[13]   Hall, T., Baddoo, N. and Wilson, D. N., “Measurement in Software Process Improvement Programmes:
       An Empirical Study”, in the 10th International Workshop on Software Measurement (IWSM 2000)
       proceedings, pp. 73-83, 2000.
[14]   McGarry, J., Card, D., Jones, C., Layman, B., Clark, E., Dean, J. and Hall, F., “Practical Software
       Measurement. Objective Information for Decision Makers”, Addison-Wesley, 2002.
[15]   Zahran, S., “Software Process Improvement: Practical Guidelines for Business Success”, Addison-
       Wesley, 1997.
[16]   Briand, L. C., Differding, C. M. and Rombach, H. D., “Practical Guidelines for Measurement-Based
       Process Improvement”, in Software Process – Improvement and Practice, vol. 2, no. 4, 1996, pp. 253-
       280.


[17] Card, D. N., “Integrating Practical Software Measurement and the Balanced Scorecard”, in the 27th
       annual international computer software and applications conference (COMPSAC’03) proceedings, pp.
       362-367, 2003.
[18]   Dybå, T., “Factors of Software Process Improvement Success in Small and Large Organisations: An
       Empirical Study in the Scandinavian Context”, in the 9th European Software Engineering Conference
       (ESEC´03) proceedings, pp. 148-157, 2003.
[19]   van Solingen, R. and Berghout, E., “The Goal/Question/Metric Method: A Practical Guide for Quality
       Improvement of Software Development”, McGraw-Hill, 1999.
[20]   FiSMA, Finnish Software Measurement Association, 2008, http://www.fisma.fi/eng/index.htm.
[21]   TUT, Tampere University of Technology, 2008, http://www.tut.fi/
[22]   UJ, University of Joensuu, 2008, http://www.joensuu.fi/englishindex.html.
[23]   Kitchenham, B., Pfleeger, S. L., Pickard, L., Jones, P., Hoaglin, D., El Emam, K. and Rosenberg, J.,
       “Preliminary Guidelines for Empirical Research in Software Engineering”, IEEE Transactions on
       Software Engineering, vol. 28, no. 8, 2002, pp. 721-733.
[24]   Curtis, S., Gesler, W., Smith, G. and Washburn, S., “Approaches to sampling and case selection in
       qualitative research: examples in geography of health”, Social Science & Medicine, vol. 50, issue 7-8,
       2000, pp. 1001-1014.
[25]   Soini, J., “Measurement in the software process: How practice meets theory”, Doctoral thesis, ISBN
       978-952-15-2060-0, Tampereen teknillinen yliopisto 2008, Publication 767, 2008.
[26]   Brown, J. S. and Duguid, P., “Balancing Act: How to capture Knowledge without Killing It”, Harvard
       Business Review, vol.78, no. 3, 2000, pp. 73-80.
[27]   Huysman M. and de Wit, D., “Knowledge sharing in practice”, Kluwer Academic Publishers, 2002.
[28]   Swan J., Newell, S., Scarborough, H. and Hislop, D., “Knowledge management and innovation:
       networks and networking”, Journal of Knowledge Management, vol. 3, no. 4, 1999, pp. 262-275.
[29]   Räsänen, P., Anttila, A. H. and Melin H., ”Tutkimusmenetelmien pyörteissä”, WS Bookwell Oy, 2005.
[30]   Schneidewind, N., “Knowledge requirements for software quality measurement”, Empirical Software
       Engineering, vol. 6, no. 3, 2001, pp. 201-205.
[31]   Baumert, J. and McWhinney, M., “Software Measures and the Capability Maturity Model”, Software
       Engineering Institute Technical Report, CMU/SEI-92-TR-25, ESC-TR-92-0, 1992.
[32]   Ravichandran, T. and Rai, A., “Structural Analysis of the Impact of Knowledge Creation and
       Knowledge Embedding on Software Process Capability”, IEEE Transactions on Engineering
       Management, vol. 50, no. 3, 2003, pp. 270-284.
[33]   Nowak, M. J. and Grantham, C. E., “The virtual incubator: managing human capital in the software
       industry”, Research Policy, vol. 29, no. 2, 2000, pp. 125-134.
[34]   O’Regan, P., O’Donnell, D., Kennedy, T., Bontis, N. and Cleary, P., “Perceptions of intellectual capital:
       Irish evidence”, Journal of Human Resource Costing and Accounting, vol. 6, no. 2, 2001, pp. 29-38.
[35]   Drucker, P., “Knowledge-Worker Productivity. The Biggest Challenge”, California Management
       Review, vol. 41, no. 2, 1999, pp. 79-94.
[36]   Ould, M. A., “Software Quality Improvement Through Process Assessment – A View from the UK”,
       IEEE Colloquium on Software Quality, pp. 1-8, 1992.
[37]   Soini, J., Tenhunen, V. and Mäkinen, T., “Managing and Processing Knowledge Transfer between
       Software Organisations: A Case Study”, in the International Conference on Management of Engineering
       and Technology (PICMET’07) proceedings, pp. 1108-1113, 2007.
[38]   Kald, M. and Nilsson, F., “Performance Measurement at Nordic Companies”, European Management
       Journal, vol. 18, no.1, 2000, pp. 113-127.
[39]   Gibbs, W. W., “Software’s Chronic Crisis”, Scientific American, Sept 94, vol. 271, no. 3, 1994, pp. 86-
       95.
[40]   Florac, W. A. and Carleton, A. D., “Measuring the Software Process: Statistical Process Control for
       Software Process Improvement”, Addison-Wesley Longman, Inc., 1999.
[41]   van Veenendaal E. and McMullan, J., “Achieving Software Product Quality”, UTN Publishers, 1997.
[42]   Hall, T. and Fenton, N., “Implementing Effective Software Metrics Programs”, IEEE Software, vol. 14,
       no. 2, 1997, pp. 55-65.
[43]   Simmons, D. B., “A Win-Win Metric Based Software Management Approach”, IEEE Transactions on
       Engineering Management, vol. 39, no. 1, 1992, pp. 32-41.
[44]   Andersen, O., “The Use of Software Engineering Data in Support of Project Management”, Software
       Engineering Journal, vol. 5, issue 6, 1990, pp. 350-356.




[45] Varkoi, T. and Mäkinen, T., “Software Process Improvement Network in the Satakunta Region –
       SataSPIN”, in the European Software Process Improvement Conference (EuroSPI’99) proceedings,
       1999.
[46]   Grady, R. B. and Caswell, D. L., “Software Metrics: Establishing a Company-Wide Program”. Prentice-
       Hall, 1987.
[47]   Kan, S. H., “Metrics and Models in Software Quality Engineering”, 2nd Edition, Addison-Wesley, 2005.
[48]   Mendonca, M. G. and Basili, V. R., “Validation of an Approach for Improving Existing Measurement
       Frameworks”, IEEE Transactions on Software Engineering, vol. 26, no. 6, 2000, pp. 484-499.
[49]   Eskola, J. and Suoranta, J. ”Johdatus laadulliseen tutkimukseen”. 7. painos. Vastapaino, Tampere, 2005.
[50]   Lukka, K., Kasanen, E., “The problem of generalizability: anecdotes and evidence in accounting
       research”, Accounting, Auditing and Accountability Journal, vol. 8, no. 5, 1995, pp. 71-90.
[51]   Briand, L., El Enam, K. and Morasca, S., “Theoretical and Empirical Validation of Software Product
       Measures”, ISERN Technical report 95-03, 1995.











         IFPUG function points or COSMIC function points?
                                      Gianfranco Lanza


Abstract
   The choice of the most suitable metric to obtain the functional measure of a software
application is not as easy a task as it could appear. Nowadays a software application is often
composed of many modules, each of them with its own characteristics, its own programming
languages and so on.
   In some cases one metric could appear more suitable than another to measure a particular
piece of software, so in the process of measuring a whole application it is possible to
apply different metrics: obviously what we cannot do is add one measure to the other (we
cannot add inches to centimetres!).
   In our organisation we are starting to use COSMIC function points to measure some parts
of an application and IFPUG function points to measure other parts.
   One of the possible problems is: “when management asks us for only one measure of the
functional dimension of the application, what can we do?” It is possible to obtain only one
measure using the conversion factors from COSMIC to IFPUG, but it would be better to
maintain two distinct measures.
   We are starting to compare the two metrics in different environments: in some cases there
is a significant difference in the functional measure. Using a metric is mainly a matter of
culture. It is important to know what a metric can give us and what it cannot, but above all,
what our goal is. If, for example, our goal is to obtain a forecast of effort and costs, we have to
know the productivity of each metric in the different environments: so we have to collect data!
   This paper illustrates how we can use both metrics in our applications.


1. Introduction
   Every day we deal with metrics to satisfy our needs. For example, how many times a
day do we look at our watch to know what time it is?
   We choose the right metric to satisfy our goal. If the goal is to know what time it is, we
use the metric “time”. If the goal is to weigh some fruit, we use the metric that gives us the
weight.
   Sometimes metrics give us measures that represent the same concept but in different
measurement units (e.g. inches and cm for length). Metrics were born to measure, and we have
to measure to know. Every metric gives us information; the functional metrics give us
information about the dimension of our software products. In every moment of our life we
have to use many metrics; in the world of application sizing, is it sufficient to use one
single metric? Can one single metric be sufficient to answer all our questions, all our needs?
Today, the architecture of our software applications is complex and full of software
components, each one with its own characteristics, programming languages and so on. It is
perfectly normal to use different metrics according to the nature of the software
component that we have to measure.

2. User’s point of view
   The dimensional measure of a software application is influenced by the choice of the user;
the user’s point of view is very important in determining our measure. The choice of the user
also depends on the goal that we have.



   If we have to measure our software application because our client wants to know the
functional dimension of the software he will buy, the user is our client. If we have to measure
the software application to obtain information about the effort to develop it, the
user could be different. In a client-server architecture we might like to know the functional
dimension of the client component and the functional dimension of the server component, as
if they were two different applications; we could have two different boundaries, simply because
we could have different productivity data for the client component and for the server
component. If our application is built with business services, we might have to measure
these business services separately to know the effort of developing them; if some of these
business services already exist, we would like to know how much reuse we will have. In
an application we can have different user’s points of view that give us different measures; the
important thing is that we must not mix them!

3. Elementary Process
   To determine the functional dimension of an application it is necessary to identify the
elementary processes, independently of the metric we use. The elementary process is the
smallest unit of activity which satisfies all of the following:
    • Is meaningful to the user.
    • Constitutes a complete transaction.
    • Is self contained.
    • Leaves the business of the application being counted in a consistent state.[1]

   The elementary process depends on the user’s point of view and, as the system has to be in
a consistent state at the end of the process, it cannot be divided into sub-processes. In an
application, if we change the user’s point of view, we can have different elementary
processes.

4. Batch Processes
   In some cases a batch process is a stream of processes, linked together to
constitute a single elementary process, whichever user’s point of view we take. If something
wrong happens during the execution of one process, the whole batch process has to be
restored. Figure 1 represents the schema of this type of batch.

   [Diagram: the batch is a chain of linked processes: Start → Process 1 → Process 2 → … →
   Process n → End.]

                                   Figure 1: Batch process.

4.1. Batch processes using IFPUG function points
   If we use IFPUG function points, we have to consider one single elementary process
for each batch process (in fact we cannot divide the elementary process into different elementary
processes, because the system would not be in a consistent state at the end of each process of
the stream; this is independent of the user’s point of view).
   Then we have to discover the primary intent of this process: whether it is to maintain some
logical files or change the system behaviour, or whether it is to expose data outside the boundary
of our application. In the first case we have an EI, otherwise an EO or EQ.



   To establish the complexity of our IFPUG functions we have to identify the FTRs (File
Types Referenced) and the DETs (Data Element Types); we could have from 3 to 7 IFPUG
function points, according to the complexity and the type of the transactional function.
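
   As a minimal sketch of this lookup, the following Python fragment derives the weight of a
single External Output from its FTR and DET counts. It assumes the standard IFPUG EO
complexity matrix and weights (Low = 4, Average = 5, High = 7 FP); the exact thresholds
should always be taken from the counting manual, and the function names here are ours:

    # Illustrative only: weight of an IFPUG External Output (EO) from its FTR
    # and DET counts, assuming the usual EO complexity matrix.
    def eo_complexity(ftr: int, det: int) -> str:
        if ftr <= 1:
            return "Low" if det <= 19 else "Average"
        if ftr <= 3:
            if det <= 5:
                return "Low"
            return "Average" if det <= 19 else "High"
        return "Average" if det <= 5 else "High"   # 4 or more FTRs

    EO_WEIGHT = {"Low": 4, "Average": 5, "High": 7}

    def eo_function_points(ftr: int, det: int) -> int:
        return EO_WEIGHT[eo_complexity(ftr, det)]

    # e.g. a batch process referencing 5 files and moving 30 data elements would
    # be a high-complexity EO worth 7 FP, like the EOH rows of Table 1 below.
    print(eo_function_points(ftr=5, det=30))   # -> 7
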
   We have to add these function points to the data function (ILF or EIF)
function points, but very often these batch processes are inside the boundary of a client-
server or web application, and the data functions that they reference are the same ones counted
in the client-server or web application; so, being inside the same boundary, we cannot count
our data functions twice!
   We also have to observe that in an application of great dimension we can have a great
number (more than 50) of these batch processes. In any case, at the end of the count we
have, for each of these processes, a number of function points that is very similar, probably
with a difference of three or four function points. Is this measure useful to us? Is it sufficiently
significant to apply, for example, a model to obtain the cost and the effort of developing the
batch processes?
   Probably the answer is no!

4.2. Batch processes using COSMIC function points
   If we use COSMIC function points, we also have to identify the elementary process from
the user’s point of view. In this case the elementary process is the same as that identified in the
IFPUG function point process. In COSMIC function points there are no data functions, but
there are objects of interest and data movements; each data movement has one object of
interest [2]. There are four types of data movements: Entry, Exit, Read and Write. In a batch
process with a stream of processes linked together to constitute a single elementary process
there is a multitude of data movements. Figure 2 illustrates an example. The weight
of each data movement is one single function point. There is no limit to the number of data
movements in an elementary process (at least one Entry and one of the others), so we can have
functional processes with a strong difference in function points, according to the number of
data movements. Is this measure useful to us? Is it sufficiently significant to apply, for
example, a model to obtain the cost and the effort of developing the batch process?
   Probably the answer is yes!


   [Diagram: a batch process with multiple data movements crossing its boundary:
   Entry 1 … Entry m, Read 1 … Read k, Update 1 … Update j, Exit 1 … Exit n.]

                                  Figure 2: Data Movements.
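
   A minimal sketch of this counting rule follows; the helper function and its argument names
are ours and only restate the rule given above, they are not part of the COSMIC method
definition:

    # Illustrative only: the COSMIC size of a functional process is the number of
    # its data movements; each Entry, Exit, Read or Write weighs exactly 1 CFP.
    def cosmic_size(entries: int, exits: int, reads: int, writes: int) -> int:
        movements = entries + exits + reads + writes
        # A functional process has at least one Entry plus at least one other
        # data movement, so its size is never below 2 CFP.
        if entries < 1 or movements < 2:
            raise ValueError("need at least one Entry and one other data movement")
        return movements

    # A batch functional process like the one sketched in Figure 2, with for
    # example 3 Entries, 12 Reads, 8 Writes and 4 Exits, sizes to 27 CFP.
    print(cosmic_size(entries=3, exits=4, reads=12, writes=8))   # -> 27
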

4.3. A real case
   Table 1 presents the functional measure of the batch processes of a business application in
IFPUG function points, and Table 2 presents it in COSMIC function points. The number of
IFPUG function points of each batch process is obtained by adding the relative transactional
functions to a portion of the data function weight (the entire data function weight of 228 FP
divided by the 32 batch processes gives 7.125 FP each).



   The number of COSMIC function points is obtained from the number of data movements of
each batch process.

                  Table 1: The IFPUG function points counting result.
                   Name                            Function     FP
                   Print and Update BATCH
                   CTTRIS000-002                   EOH          7
                   CTTRIS000-003                   EOH          7
                   CTTRIS000-008                   EOH          7
                   CTTRIS500-001                   EOH          7
                   CTTRIS500-003                   EOA          5
                   CTTRIS600-001                   EOH          7
                   CTTRTP000-002                   EOH          7
                   CTTRTP000-003                   EOH          7
                   CTTRTP000-005                   EOH          7
                   CTTRTP000-008                   EOH          7
                   CTTRTP000-009                   EOH          7
                   CTTRTP000-010                   EOH          7
                   CTTRTP000-011                   EOH          7
                   CTTRTP000-012                   EOH          7
                   CTTRTP000-013                   EOH          7
                   CTTRTP000-014                   EOH          7
                   CTTRTR500-001                   EOH          7
                   CTTRTR600-001                   EOH          7
                   CTTRTR600-002                   EOH          7
                   CTTRTR600-004                   EOH          7
                   CTTRTR800-001                   EOH          7
                   CTTRTR800-002                   EOH          7
                   CTTRTS000-001                   EOH          7
                   CTTRTS700-001                   EOH          7
                   CTTRTS700-002                   EOH          7
                   CTTRTS700-008                   EOH          7
                   CTTRTU000-001                   EOH          7
                   CTTRTU000-002                   EOH          7
                   CTTRTU000-003                   EOH          7
                   CTTRTU000-008                   EOH          7
                   CTTRTS700-001                   EOH          7
                   CTTRTS700-002                   EOH          7
                   Total =                                      222
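
   A small worked check of these figures (the counts are read directly from Table 1 and from
the data function weight quoted above; the variable names are ours):

    # 31 high-complexity EOs (7 FP each) and 1 average-complexity EO (5 FP),
    # plus the 228 FP of data functions spread evenly over the 32 batch processes.
    transactional_fp = 31 * 7 + 1 * 5    # 222 FP, the Table 1 total
    data_function_share = 228 / 32       # 7.125 FP attributed to each batch process
    print(transactional_fp)              # 222
    print(data_function_share)           # 7.125
    print(7 + data_function_share)       # 14.125 FP for a single EOH batch process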




Asl bi sl metrics themasessie 2013   devops sogetiAsl bi sl metrics themasessie 2013   devops sogeti
Asl bi sl metrics themasessie 2013 devops sogeti
 
Begroten van software projecten - Hogeschool Rotterdam gastcollege 05-11-2013
Begroten van software projecten - Hogeschool Rotterdam gastcollege 05-11-2013Begroten van software projecten - Hogeschool Rotterdam gastcollege 05-11-2013
Begroten van software projecten - Hogeschool Rotterdam gastcollege 05-11-2013
 
Van heeringen estimate faster, cheaper, better
Van heeringen   estimate faster, cheaper, betterVan heeringen   estimate faster, cheaper, better
Van heeringen estimate faster, cheaper, better
 
van Heeringen - estimate faster,cheaper and better!
van Heeringen - estimate faster,cheaper and better!van Heeringen - estimate faster,cheaper and better!
van Heeringen - estimate faster,cheaper and better!
 
The value of benchmarking IT projects - H.S. van Heeringen
The value of benchmarking IT projects - H.S. van HeeringenThe value of benchmarking IT projects - H.S. van Heeringen
The value of benchmarking IT projects - H.S. van Heeringen
 
Begroten van agile projecten, technical meeting Sogeti 2013-09
Begroten van agile projecten, technical meeting Sogeti 2013-09Begroten van agile projecten, technical meeting Sogeti 2013-09
Begroten van agile projecten, technical meeting Sogeti 2013-09
 
Sogeti seminar Supplier Performance Measurement
Sogeti seminar Supplier Performance MeasurementSogeti seminar Supplier Performance Measurement
Sogeti seminar Supplier Performance Measurement
 
Software Estimating and Performance Measurement
Software Estimating and Performance MeasurementSoftware Estimating and Performance Measurement
Software Estimating and Performance Measurement
 

Smef2009

  • 5. Proceedings 6th Software Measurement European Forum, Rome 2009 The Software Measurement European Forum 2009 is the sixth edition of the successful event for the software measurement community held previously in Rome in 2004, 2005, 2006 and 2007. In 2008 SMEF was located in Milan, one of the most important European business cities. Now we’re back in Rome. The two Italian market and competence leading companies, Istituto Internasionale di Ricerca (http://www.iir-italy.it) and Data Processing Organization (http://www.dpo.it), that have managed the previous successful editions, continued the collaboration with Ton Dekkers (Galorath International) in the Program Management. SMEF aspires to remain a leading worldwide event for experience interchange and knowledge improving on a subject, which is critical for the ICT governance. The Software Measurement European Forum will provide an opportunity for the publication of the latest researches, industrial experiences, case studies, tutorials, best practices on software measurement and estimation. This includes measurement education, methods, tools, processes, standards and the practical application of measures for management, risk management, contractual and improvement purposes. Practitioners from different countries will share their experience and discuss current problems and solutions related to the ICT Governance based on software metrics. As the 2009 event focus is chosen: “Making software measurement be actually used“ Although software measurement is a young discipline, it has quickly evolved and many problems have been adequately solved in conceptual frameworks. Available methods, standards and tools are certainly mature enough to be effectively used to manage the software production processes (development, maintenance and service providing). Unfortunately only few organisations, compared to the potential interested target, have brought the measurement process from theory to practice. Therefore the request for contributions related to implementation in order to analyse causes and possible solutions to the problem of measurement as a “useful but not used” capability. Papers are welcome to provide guidance to establish and actually using a roadmap for covering the distance from theory to practice. iii
  • 6. Proceedings 6th Software Measurement European Forum, Rome 2009
With this in mind, the Program Committee asked for papers focusing on the following areas:
• Focus Area: Making software measurement be actually used.
• How to integrate Software Measurement into Software Production Processes.
• Measurement-driven decision making processes.
• Data reporting and dashboards.
• Software measurement and Balanced Scorecard approaches.
• Service measurement methods (SLA and KPI).
• Software measurement in contractual agreements.
• Software measurement and standard process models (ITIL, ASL, ...).
• Software measurement and Software Maturity Models (CMM-I, SPICE, Six Sigma, ...).
• Software measurement in a SOA environment.
• Functional size measurement methods (IFPUG and COSMIC tracks).
• (Software) Measurement to control risks.
• Software Reuse metrics.
It's quite ambitious to cover all of this, but we are confident that the presentations at the conference and the papers in the proceedings will cover most of the issues. The angles chosen vary from theory to practice and from academic to pragmatic. This interesting mix of viewpoints will offer you a number of potential (quick) wins to take back.
Cristina Ferrarotti, Roberto Meli, Ton Dekkers
May 2009
  • 7. Proceedings 6th Software Measurement European Forum, Rome 2009
TABLE OF CONTENT

DAY ONE - MAY 28
KEYNOTE: How to bring "software measurement" from research labs to the operational business processes
  Roberto Meli
An empirical study in a global organisation on software measurement in Software Maturity Model-based software process improvement (p. 1)
  Rob Kusters, Jos Trienekens, Jana Samalikova
Practical viewpoints for improving software measurement utilisation (p. 11)
  Jari Soini
IFPUG function points or COSMIC function points? (p. 25)
  Gianfranco Lanza
  • 8. Proceedings 6th Software Measurement European Forum, Rome 2009
DAY TWO - MAY 29
Personality and analogy-based project estimation (p. 37)
  Martin Shepperd, Carolyn Mair, Miriam Martincova, Mark Stephens
Effective project portfolio balance by forecasting project's expected effort and its break down according to competence spread over expected project duration (p. 49)
  Gaetano Lombardi, Vania Toccalini
Defect Density Prediction with Six Sigma (p. 59)
  Thomas Fehlmann
Using metrics to evaluate user interfaces automatically (p. 67)
  Izzat Alsmadi, Muhammad AlKaabi
Estimating Web Application Development Effort Employing COSMIC: A Comparison between the use of a Cross-Company and a Single-Company Dataset (p. 77)
  Filomena Ferrucci, Carmine Gravino, Sergio Di Martino, Luigi Buglione
From performance measurement to project estimating using COSMIC functional sizing (p. 91)
  Cigdem Gencel, Charles Symons
A 'middle-out' approach to Balanced Scorecard (BSC) design and implementation for Service Management: Case Study in Off-shore IT-Service Delivery (p. 105)
  Srinivasa-Desikan Raghavan, Monika Sethi, Dayal Sunder Singh, Subhash Jogia
FP in RAI: the implementation of software evaluation process (p. 117)
  Marina Fiore, Anna Perrone, Monica Persello, Giorgio Poggioli
KEYNOTE: Implementing a Metrics Program: MOUSE will help you (p. 129)
  Ton Dekkers
APPENDIX
Program / Abstracts (p. 143)
Author's affiliations (p. 159)
  • 11. Proceedings 6th Software Measurement European Forum, Rome 2009 Results of an empirical study on measurement in Maturity Model-based software process improvement Rob Kusters, Jos Trienekens, Jana Samalikova Abstract This paper reports on a survey amongst software groups in a multinational organisation. The paper presents and discusses metrics that are being recognised in the software groups. Its goal is to provide useful information to organisations with or without a software measurement program. An organisation without a measurement program may use this paper to determine what metrics can be used on the different CMM levels, and possibly prevent that the organisation makes use of metrics which are, not yet, effective. An organisation with a measurement program may use this paper to compare their metrics with the metrics identified in this paper and eventually add or delete metrics accordingly. 1. Introduction Although both CMM and its successor model CMMI address the measurement of process improvement activities on its different levels, it does not prescribe how software measurement has to be developed, and what kind of metrics should be used. In accordance with experiences from practice, software measurement requires well-defined metrics, stable data collection, and structured analysis and reporting processes [10], [4]. The use of measurement in improving the management of software development is promoted by many experts and in literature. E.g. the SEI provides sets of software measures that can be used on the different levels of the Capability Maturity Model, see [1]. In [7] and an approach is presented for the development of metrics plans, and also gives an overview of types of metrics (i.e. project, process, product) that can be used on the different levels of maturity. To investigate the status and quality of software metrics in the different software groups at Philips, its Software Process Improvement (SPI) Steering Committee decided to perform an empirical research project in the global organisation. The goal of this paper is to identify, on the basis of empirical research data, the actual use of metrics in the software development groups of the Philips organisation. The data have been collected in the various software groups of this organisation. The main research questions are respectively: • What types of metrics are recommended in literature regarding their usage on the different CMM-levels? • What types of metrics are used in software groups? • What types of metrics are used in the software groups on the different CMM levels? • How do the empirical research findings on metrics usage contribute to literature? In Section 2 of this paper relevant literature on the subject is addressed. Section 3 presents the approach that has been followed to carry out the survey. Some demographics of the survey are presented in section 4. Section 5 addresses the results of the survey and section 6 presents the conclusions. 1
  • 12. Proceedings 6th Software Measurement European Forum, Rome 2009 2. Usage of metrics versus the levels of the CMM: a literature overview As the maturity level of a software development organisation increases, additional development issues should be addressed, e.g. project management, engineering, support, and process management issues, see e.g. [1]. To monitor and control these issues, it would seem reasonable that these additional issues would prompt the usage of additional metrics. Based on this consideration a literature review on the relationship between CMM levels attained and the type of metrics used has been carried out. Surprisingly, only a few papers were found that deal with this issue. We will discuss them in the following of this section. In [8] a straightforward approach is taken that any software development organisation requires five core metrics: time, effort, size, reliability and process productivity. The authors don't address in their book a possible connection between number and type of metrics and the CMM-levels, however they state that these five core metrics should form a sufficient basis for control. From a CMM perspective 'control’ is a level 2 issue, and one could deduce, that in essence the five metrics are needed for monitoring and control on the so-called Managed level 2. However, also on the Initial level 1, organisations will at least aspire towards control, and these five metrics will also be useful on this level 1. Other authors do differentiate between metric usage and the CMM level attained. We will discuss a number of publications in the following. We would like to keep in mind the logical assumption, that when a particular type of metrics is advised for a certain level, it obviously is also useful at the higher CMM levels. The Software Engineering Institute, i.e. [1] provides guidelines for metric usage related to the maturity level attained. For level 1 no guidelines are provided. Towards level 2, project metrics (progress, effort, and cost) and simple project stability metrics are advocated. At level 2 recording of planned and actual results on progress, effort and cost are required. Towards level 3 ranges are added for the project metrics, and at level 4 so-called 'control limits' complete the total project control picture. At level 2 also product quality metrics are introduced, to which at levels 3 and 4 metrics are added to, similarly as with regard to the foregoing project metrics, to provide the options to go from recording or registration of product quality aspects towards explanation and control. Further, starting from level 2 computer resource metrics are advocated, and starting from level 3 training metrics. Surprisingly these computer resource and training metrics are not mentioned in any of the publications that we investigated. At level 5, finally, metrics are added that should indicate the degree of process change on a number of issues, together with the actual and planned costs and benefits of this process change. In [7] a direct line is drawn between the CMM level attained and the metrics that should be used. Metrics are to be implemented step by step in five levels, corresponding to the maturity level of the development process. For level 1 some elementary project metrics (such as product size and staff effort) are advised, in order to provide a baseline rate of e.g. productivity. These baseline metrics should provide a basis for comparison as improvements are made and maturity increases. 
Logically deducing metrics requirements from the characteristics of each level, they indicated further that level 2 requires project related metrics (e.g. software size, personnel effort, requirements volatility), level 3 requires product related metrics (e.g. quality and product complexity), and level 4 so-called process wide metrics (e.g. amount of reuse, defect identification, configuration management, etc.). No recommendations are provided for level 5. 2
  • 13. Proceedings 6th Software Measurement European Forum, Rome 2009 In [11] also some advice is provided and in essence the SEI approach is followed by these authors. However, the notion of base lining, advocated by [7], is adopted by them. So they advocate using baseline metrics at level 1, as what they refer to as post shipment metrics (e.g. defects, size, cost, schedule). At level 2 they, similar to [7], aim at project control, but also, similar to [1], they include product quality (e.g. defect) information. On levels 3 and 4 they provide similar advice as [1]. However, at level 4 they add base line metrics for process change, to support with baseline data the process change activities and associated metrics at level 5, e.g. on transfer of new technology. Some differences between these approaches that can be identified are: • In [7] the usage of baseline metrics for level 1 is advised, In [1] it is refrained from doing so, but [10] takes up this point. • In [7] product quality metrics are discussed starting from level 3, while in [1] this is started at level 2. • In [7] process wide metrics are introduced at level 4, while in [1] the authors wait till level 5. In [11] the same position as in [1] is taken, but adds to this baseline metrics at level 4. Based on these papers, we can deduce the following framework of assumptions for the usage of metrics on the different CMM-levels of maturity: • Level-1 organisations will focus on baseline metrics. • Level-2 organisations will add to this project related metrics and, according to [11] and [1], also product quality metrics. However the usage of the latter type of metrics on this level is not in accordance with [7]. • Level-3 will certainly contain product metrics, both on quality and complexity. • Level-4 is less clear. According to [7] 'process wide' metrics will be included. In [11] it is agreed on this, but only at a base line level. • Level-5 will definitely require process wide' metrics. Summarising it will be clear that we can expect the number and diversity of metrics to increase with the maturity level. From literature some survey based evidence is available. In [5] it is shown that elementary metrics, such as size, cost, effort en lead time are used at the lower maturity levels and also show that they precede the usage of more complex metrics, such as on product quality and review effectiveness at the higher maturity levels. Similar results were found in [2]. In [6] metric usage in high maturity organisations is looked at. Metrics regarding size, effort, schedule, defects and risk were uniformly found. With the exception of ‘risk’ these metrics correspond directly to the core metrics as identified in [8] and the baseline metrics as suggested by [11]. Surprisingly, although metrics were used for process analysis and improvement, in [6] no specific process improvement oriented metrics are mentioned. In this paper we will use survey data to compare these with our framework of assumptions. 3
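The framework of assumptions above lends itself to a simple lookup. The sketch below is a minimal, illustrative encoding of it in Python; the category names and the rule that a level inherits the categories expected at lower levels are our own reading of the cited sources, not part of any of the original papers.

```python
# Illustrative encoding of the literature-based framework of assumptions:
# metric categories that the cited sources suggest become relevant at each
# CMM level. Levels are assumed to inherit the categories of lower levels.
# Category names are our own shorthand, not taken from the survey instrument.

NEW_AT_LEVEL = {
    1: {"baseline"},
    2: {"project", "product_quality"},   # product quality per [1]/[11]; [7] defers it to level 3
    3: {"product_quality", "product_complexity"},
    4: {"process_baseline"},             # per [11]; [7] already expects process-wide metrics here
    5: {"process_improvement"},
}

def expected_categories(cmm_level: int) -> set[str]:
    """Return the cumulative set of metric categories expected up to a CMM level."""
    expected: set[str] = set()
    for level in range(1, cmm_level + 1):
        expected |= NEW_AT_LEVEL.get(level, set())
    return expected

if __name__ == "__main__":
    for level in (1, 2, 3, 5):
        print(level, sorted(expected_categories(level)))
```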
  • 14. Proceedings 6th Software Measurement European Forum, Rome 2009

3. The survey
The survey was carried out in the software groups of Philips. In these software groups, software is developed for a large product portfolio, ranging from shavers to TVs to X-ray equipment. Software size and complexity are increasing rapidly and the total software staff is growing continuously, from less than 2000 in the early 90s to more than 5000 now. More and more software projects require more than 100 staff-years located on multiple development sites. The software groups also have to deal with the fact that the share of software developed by third parties is rapidly increasing.
Software Process Improvement (SPI) in Philips started in 1992. CMM is used for SPI because CMM is well defined and the measurement of the process improvements is done by objective assessors, supported by jointly developed CMM interpretation guidelines [9]. Although Philips has switched to the successor model CMM Integration (CMMI), the data discussed in this paper refer to the original CMM. Within Philips, the software capabilities and achievements are measured at corporate level by registering the achieved CMM levels of the company's software groups. As a result the measurements are comparable across the organisation and the software process capability (CMM) level of each individual software group is known. Philips developed a cross-organisational infrastructure for SPI. From the organisation perspective SPI is organised top-down as follows: Core Development Council, SPI Steering Committee, SPI Coordinators and Software Groups. From the communication perspective the following events and techniques can be recognised: a formal Philips SPI Policy, a bi-annual Philips Software Conference, temporary Philips SPI Workshops, a Philips SPI Award, an internal SPI Web Site, and so-called SPI Team Room sessions.
The research group used brainstorming sessions and discussions to develop a number of questions regarding metrics usage. We asked: "Which of the following quality measurements/metrics do you currently use?". For this a list of 17 predefined metrics was provided (with a yes/no scale) that had been derived from earlier research sponsored by the SPI Steering Committee of the global Philips organisation. To this list software groups could add their own metrics. A drawback of this approach was that no direct link to previous surveys (e.g. as reported in [2]) could be established. We felt that this drawback was compensated by the usage of terms that were familiar to the participants.

4. Survey demographics
This section briefly gives some survey demographics. The survey was aimed at group management and was completed by them. The number of responding software groups was 49 out of 74, a very satisfactory response of about 67 %. Table 1 shows the global distribution over the continents of the responding software groups. Table 2 presents the CMM-levels that have been achieved in the organisation.

Table 1: Distribution of responding software groups over the continents.
  Continent   Number of software groups
  Europe      32
  Asia         8
  America      9
  Total       49
  • 15. Proceedings 6th Software Measurement European Forum, Rome 2009

Table 2: Number of software groups on the different CMM-levels.
  CMM level      Number of groups   Average # of metrics reported
  1              20                 6.2
  2              13                 6.3
  3              10                 11.5
  4              0                  n.a.
  5              3                  18.0
  Not reported   3
  Total          49

Table 2 shows that more than 50 % of the software groups that completed the survey succeeded in leaving the lowest level one (initial software development) of the CMM. It is also clear that the number of metrics reported increases substantially starting from level 3.

5. Results of the survey
This section introduces and discusses the results of the survey. Apart from reacting to the predefined metrics, most respondents added (often significant numbers of) self-described metrics. In order to be able to manage these results, the data were classified. The classification used is derived from existing classifications such as presented in [1] and [7]. The two-level categorisation used is presented in Table 3. The results of the survey in these terms are presented in Table 4.

Table 3: Categories of metrics.
  Category               Subcategories
  Project                Progress, Effort, Cost
  Product                Quality, Stability, Size, Complexity
  Process                Quality, Stability
  Process improvement    Reuse, Improvement
  Computer utilisation   n.a.
  Training               n.a.

Table 4: Frequencies per metric class.
  Category              Freq.   Percent   Subcategory    Freq.   Percent
  project               179     45.4      progress       80      20.3
                                          effort         97      24.6
                                          cost           2       0.5
  product               190     48.2      quality        137     34.8
                                          stability      15      3.8
                                          size           38      9.6
  process               8       2.0       quality        4       1.0
                                          stability      4       1.0
  process improvement   13      3.3       reuse          8       2.0
                                          improvement    5       1.3
  computer              0       0.0       utilisation    0       0.0
  training              1       0.3       training       1       0.3
  other                 3       0.8       other          3       0.8
  Total                 394     100.0     total          394     100.0
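To make the classification step concrete, the sketch below shows one way self-described metric names could be bucketed into the two-level scheme of Table 3 and tallied as in Table 4. The keyword map and the example metric names are invented purely for illustration; in the study itself the classification was done manually by the research team.

```python
from collections import Counter

# Hypothetical keyword map from metric-name fragments to (category, subcategory).
# This is an illustrative stand-in for the manual classification used in the study.
KEYWORD_MAP = [
    (("milestone", "progress", "schedule"), ("project", "progress")),
    (("effort", "hours"),                   ("project", "effort")),
    (("cost", "budget"),                    ("project", "cost")),
    (("defect", "quality", "review"),       ("product", "quality")),
    (("requirement", "volatility"),         ("product", "stability")),
    (("size", "loc", "function point"),     ("product", "size")),
    (("reuse",),                            ("process improvement", "reuse")),
]

def classify(metric_name: str) -> tuple[str, str]:
    """Map a reported metric name to a (category, subcategory) bucket."""
    name = metric_name.lower()
    for keywords, bucket in KEYWORD_MAP:
        if any(k in name for k in keywords):
            return bucket
    return ("other", "other")

def tally(metric_names: list[str]) -> Counter:
    """Count reported metrics per (category, subcategory), as in Table 4."""
    return Counter(classify(m) for m in metric_names)

# Example with invented metric names:
sample = ["Defects found in system test", "Effort spent per phase",
          "Requirements volatility", "Milestone slip", "Code size (LOC)"]
print(tally(sample))
```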
  • 16. Proceedings 6th Software Measurement European Forum, Rome 2009
Three metrics were reported that fell outside this categorisation. These were 'employee satisfaction', 'patents approved' and 'PMI'. Computer, with no metrics reported, and Training, with only a single metric, score extremely low. These can be discounted from the further analysis. The large majority of the other 391 metrics fall within the classes 'project' and 'product', which together account for 93.6%. Process and process improvement related metrics score low, with only 5.3% of metrics.
According to [8], all organisations should at least collect the five core metrics time, effort, reliability, size, and productivity. Given that productivity can be seen as a combination of size and effort, Table 5 shows the results on this requirement. An organisation scored on a category if at least one metric was present in that category.

Table 5: Organisations (number and percentage) collecting data on the five core metrics.
  CMM level   Project progress   Project effort   Product quality   Product size   Productivity   All core metrics
  1           14 (82%)           14 (82%)         13 (76%)          12 (71%)       11 (65%)       8 (47%)
  2           11 (85%)           13 (100%)        9 (69%)           7 (54%)        7 (54%)        5 (38%)
  3           10 (100%)          10 (100%)        10 (100%)         9 (90%)        9 (90%)        9 (90%)
  5           3 (100%)           3 (100%)         3 (100%)          3 (100%)       3 (100%)       3 (100%)

All level 5 and nearly all level 3 organisations fulfill this requirement. However, less than half of level 1 and level 2 organisations do so. The main missing metric seems to be product size. However, size can always be measured retroactively and sufficient automated support is available to support this. Taking this into account, all level 3 and 5 organisations, most (69%) of level 2 organisations, and about half (53%) of level 1 organisations fulfill this requirement. Apparently, starting from level 2, this requirement is largely met, providing these organisations sufficient (at least according to [8]) information for control.
A next interesting step is to compare the type of metric collected with the requirements as formulated in the literature survey presented above, i.e. [1], [11] and [7]. According to this we would expect (given that we have no data for level 4):
• Level-1 organisations will focus on baseline metrics.
• Level-2 organisations will add to this project related metrics and, according to [11] and [1], also product quality metrics.
• Level-3 will contain product metrics, both on quality and complexity.
• Level-5 will also require process wide metrics.
Table 6 contains numbers of metrics found per sub-category and per CMM-level, and can be used to check these expectations.
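The scoring rule behind Table 5 can be expressed compactly. The sketch below uses illustrative class names and a simple per-organisation record format (neither is taken from the original questionnaire); it treats productivity as derivable once both size and effort are collected, as argued above.

```python
# Sketch of the scoring rule behind Table 5: an organisation "covers" a core
# metric class if it reports at least one metric in that class, and productivity
# is treated as derivable when both size and effort are present.

CORE_CLASSES = ("project_progress", "project_effort", "product_quality", "product_size")

def covers_core_metrics(reported_classes: set[str]) -> bool:
    """True if an organisation reports all five core metrics of [8]."""
    has_all_basic = all(c in reported_classes for c in CORE_CLASSES)
    has_productivity = ("productivity" in reported_classes
                        or {"product_size", "project_effort"} <= reported_classes)
    return has_all_basic and has_productivity

def coverage_per_level(organisations: list[dict]) -> dict[int, float]:
    """Share of organisations per CMM level that cover all five core metrics."""
    per_level: dict[int, list[bool]] = {}
    for org in organisations:
        per_level.setdefault(org["cmm_level"], []).append(
            covers_core_metrics(org["metric_classes"]))
    return {lvl: sum(flags) / len(flags) for lvl, flags in per_level.items()}

# Example with two invented organisations:
orgs = [
    {"cmm_level": 3, "metric_classes": {"project_progress", "project_effort",
                                        "product_quality", "product_size"}},
    {"cmm_level": 1, "metric_classes": {"project_progress", "project_effort"}},
]
print(coverage_per_level(orgs))   # {3: 1.0, 1: 0.0}
```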
  • 17. Proceedings 6th Software Measurement European Forum, Rome 2009

Table 6: Number of metrics per sub-category per CMM-level.
  Metric subcategory    Level 1   Level 2   Level 3   Level 5   Total
  project progress      20        24        27        6         77
  project effort        28        27        23        13        91
  project cost          0         1         1         0         2
  product quality       43        17        47        18        125
  product stability     5         2         7         0         14
  product size          14        8         9         3         34
  process quality       0         0         0         4         4
  process stability     3         1         0         0         4
  process reuse         3         3         2         0         8
  process improvement   1         0         1         3         5
  Total                 117       83        117       47        364

It is clear that level-1 organisations do not adhere to the expectations as formulated in literature. We see a widespread usage of all types of metrics. If we combine this with the less than stellar adherence to the principle of five core metrics as formulated in [8], we can conclude that level-1 organisations' usage of metrics is in line with their CMM level. It is 'ad hoc' or even 'chaotic'. The lack of project control at this level apparently spills over into the lack of consideration of which metrics to use.
At level 2 we see a more concentrated usage of metrics. As expected, many metrics focus on project matters. We also see a significant number of product metrics. This fits in with requirements as provided in [1] and at first glance contradicts those made in [7]. However, the CMM model is a growth model. It is to be expected that organisations at level 2 are involved in a transition process towards level 3. Usage of product metrics can therefore also be seen in this light, supporting the assumptions made in [7]. And given the relatively low number of product metrics encountered (just over one per organisation), this explanation may seem more likely. Some scattered process metrics are found, but in such low numbers (less than 0.25 of this type of metric per organisation) that we feel we can say that on average level 2 organisations select metrics in such a way that is proper for their position in the CMM model. This is supported by the high level of usage of the five core metrics as advocated in [8].
Looking at level 3 organisations a similar picture can be seen. We see extensive usage of project metrics. As can be expected for level 3, usage of product metrics is also extensive. This is exactly in line with what would be expected at this level. However, somewhat more surprising, almost no process metrics are reported. If we take the line of reasoning mentioned before, that level x organisations will be on the road to level x+1, and therefore report metrics that fit in with this higher level, a somewhat higher number would be expected. This can mean that the participating organisations have just reached level 3 and have not yet started the next step, or that level 3 is considered a sufficient target and no further progress is envisaged. Data show that level-3 organisations have had an SPI program in place for 6 years on average. All process metrics that were encountered were found in those organisations that had their SPI program in place for 8 years on average. The remainder had their program in place for 5 years on average. This would seem to suggest that the first explanation is the most likely one. All level three organisations gather the core metrics as suggested in [8]. Altogether this suggests that level 3 organisations, as do level 2 organisations, follow a well-considered approach towards metric usage that fits in with their position in the CMM maturity model.
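A cross-tabulation such as Table 6, and the per-organisation averages discussed in the text, amount to simple counting. The following sketch assumes each reported metric has already been reduced to a (CMM level, subcategory) pair; the records and organisation counts in the example are invented.

```python
from collections import defaultdict

# Sketch of the cross-tabulation behind Table 6 and of the "process-related
# metrics per organisation" comparison in the text. Inputs are illustrative.

PROCESS_SUBCATS = ("process quality", "process stability",
                   "process reuse", "process improvement")

def crosstab(metrics: list[tuple[int, str]]) -> dict[str, dict[int, int]]:
    """Count reported metrics per subcategory and CMM level."""
    table: dict[str, dict[int, int]] = defaultdict(lambda: defaultdict(int))
    for level, subcategory in metrics:
        table[subcategory][level] += 1
    return {sub: dict(levels) for sub, levels in table.items()}

def process_metrics_per_org(metrics: list[tuple[int, str]],
                            orgs_per_level: dict[int, int]) -> dict[int, float]:
    """Average number of process-related metrics per organisation, per CMM level."""
    counts: dict[int, int] = defaultdict(int)
    for level, subcategory in metrics:
        if subcategory in PROCESS_SUBCATS:
            counts[level] += 1
    return {level: counts[level] / n for level, n in orgs_per_level.items() if n}

# Example with invented records:
records = [(5, "process quality"), (5, "project effort"), (3, "product quality")]
print(crosstab(records))
print(process_metrics_per_org(records, {3: 10, 5: 3}))
```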
  • 18. Proceedings 6th Software Measurement European Forum, Rome 2009
No data are available for level 4 organisations. In [7] usage of reuse metrics at this level is indicated. In the survey, reuse metrics were included as part of the 17 pre-defined metrics. Eight organisations indicated that they used this type of metric. Three of these were level 1, three were level 2, and two were level 3. None of these were level 5, as would have been expected according to [7]. Apparently usage of reuse metrics is not a characteristic of level 4 organisations.
Only limited data are available for level 5 organisations. However these data suggest a similar result as was obtained at level 3. Metrics usage almost doubles for level 5 organisations, in line with the results as reported in [6]. And although only a few process metrics are reported, two out of three do use them. Furthermore, this limited number of organisations does account for one third (7) of all (21) process related metrics. On average each level 5 organisation uses 2.3 process related metrics, as opposed to 0.3 process related metric per organisation for the other organisations. And, as is suggested in [6], considered usage of project and product metrics can provide insight into the development of the process. Finally, all level 5 organisations collect data for the five core metrics. All this again suggests a carefully considered usage of metrics by these organisations.
In absolute numbers the amount of process related metrics (21) is small. Only 5.3% of metrics fall into this category. A higher number would have been expected, especially for the organisations from levels 3 and 5. It is true that level 5 organisations do show a higher number. On average each level 5 organisation uses 2.3 process related metrics, as opposed to 0.3 for the other organisations. This definitely shows that this type of metric is used more at higher levels of maturity, but still this is not a very high usage.

6. Conclusions
The main results of the analysis are as follows:
• On each of the CMM levels, software groups make use of a quite large number of project metrics (i.e. to measure progress, effort and cost).
• Software groups on CMM level 1 (the Initial process level) make, or try to make, use of a large diversity of metrics, also from the categories product and process. This is in contradiction with recommendations from literature, which stress the usage of project metrics (mainly to provide base line data as a start for further improvement).
• On CMM level 2 (the Managed process level) software groups are definitely restricting themselves to project metrics; the usage of product metrics shows a sharp decrease. Obviously software groups on this level know better what they are doing, or what they should do, i.e. project monitoring and controlling. As a stepping stone towards level three, also a number of product quality metrics can be found here.
• Software groups on CMM level 3 (the Defined level) clearly show a strong increase in product quality measurement. Obviously, this higher level of maturity reached enables software groups to effectively apply product metrics in order to quantitatively express product quality.
• Remarkable is that the analysis of the metrics showed that software groups on CMM level 3 and above don't show a strong increase in the usage of process related metrics. The main instruments in software measurement to reach a satisfactory maturity level seem to be the project control and product quality metrics.
  • 19. Proceedings 6th Software Measurement European Forum, Rome 2009
The research shows a clear emphasis on project progress and product quality metrics in software groups that are striving for a higher level of maturity. It is also interesting to see that on each of the first three CMM levels software groups make use of about the same high number of project metrics, so project control is important on all levels. Surprisingly, and in contradiction with the literature, process metrics are hardly used, even by software groups on the higher CMM levels.

7. References
[1] Baumert, J.H., McWhinney, M.S., Software Measures and the Capability Maturity Model, Software Engineering Institute, Technical Report CMU/SEI-92-TR-25, Carnegie Mellon University, Pittsburgh, 1992.
[2] Brodman, J.G., Johnson, D.L., Return on investment from software process improvement as measured by US industry, Crosstalk, April 1996.
[3] Ebert, C., Dumke, R., Software Measurement: Establish, Extract, Evaluate, Execute, Springer-Verlag, Berlin Heidelberg, 2007.
[4] El-Emam, K., Goldenson, D., McCurley, J., Herbsleb, J., Modeling the Likelihood of Software Process Improvement: An Exploratory Study, Empirical Software Engineering, Vol. 6, No. 3, pp. 207-229, 2001.
[5] Gopal, A., Krishnan, M.S., Mukhopadhyay, T., Goldenson, D., Measurement programs in software development: determinants of success, IEEE Transactions on Software Engineering, Vol. 28, No. 9, pp. 863-875, 2002.
[6] Jalote, P., Use of metrics in high maturity organisations, Software Quality Professional, Vol. 4, No. 2, March 2002, pp. 7-13.
[7] Pfleeger, S.L., McGowan, C., Software Metrics in the Process Maturity Framework, Elsevier Science Publishing Co., Inc., 1990.
[8] Putnam, L.H., Myers, W., Five Core Metrics: The Intelligence Behind Successful Software Management, Dorset House Publishing, 2003.
[9] Software Engineering Institute, Carnegie Mellon University, The Capability Maturity Model: Guidelines for Improving the Software Process, Addison Wesley Professional, Boston, USA, June 1995.
[10] Trienekens, J.J.M., Towards a Model for Managing Success Factors in Software Process Improvement, Proceedings of the 1st International Workshop on Software Audit and Metrics (SAM) during ICEIS 2004, Porto, Portugal, pp. 12-21, 2004.
[11] Walrad, C., Moss, E., Measurement: the key to application development quality, IBM Systems Journal, Vol. 32, No. 3, pp. 445-460, Riverton, NJ, USA, 1993.
  • 21. Proceedings 6th Software Measurement European Forum, Rome 2009 Practical viewpoints for improving software measurement utilisation Jari Soini Abstract An increasing need for measurement data used in decision-making on all levels of the organisation has been observed when implementing the software process, but on the other hand measuring the software process in practice is utilised to a rather limited extent in software companies. Obviously there seems to be a gap between need and use on this issue. In this paper we present the viewpoints on measurement issues of Finnish software companies and their experiences. Based on the empirical information obtained through interview sessions, this paper discusses the justification as well as potential reasons for challenging measurement utilisation in the software process. The key factors are also discussed that, at least from the users’ perspective, should be carefully taken into account in connection with measurement and also some observed points that should be improved in their current measurement practices. These results can be used to evaluate and focus on the factors that must be noted when trying to determine the reasons why measurement is only utilised to a limited extent in software production processes in practice. This information could be useful when trying to solve the difficulties observed in relation to measurement utilisation and thus advancing measurement usage in the context of software engineering. 1. Introduction In the literature there are a lot of theories and guides available for establishing and implementing measurement in a software process. However, in practice difficulties have been observed in implementing measurement in the software development process [1],[2] and there is also evidence that measurement is not a particularly common routine in software engineering [3],[4]. These difficulties are partly due to the nature of software engineering as well as people’s attitude to measurement overall, but there may also be a lack of knowledge related to measurement practices, measurement objects, and also the metrics themselves. Very often difficulties arise when trying to focus the measurement and also when trying to use and exploit the measurement results. In many cases it is unclear what should be measured and how and also by whom the measurement data obtained should be interpreted [5]. This paper presents the viewpoints of software organisations – the advantages and disadvantages - related to the setting up or use of software measurement in the software engineering process. Based on empirical information obtained through interviews, this paper discusses the potential issues challenging measurement utilisation in the software process. The aim of this study is to give empirical experiences of which factors are felt to be important in measurement implementation. The empirical research material used in this examination was obtained from a software research project [6] carried out between 2005 and 2007 in Finland. The aim of the two-year-long research project was to investigate the current status and experiences of measuring the software engineering process in Finnish software organisations and to enhance the empirical knowledge on this theme. The overall goal of the research was to advance and expand understanding and knowledge of measurement usage, utilisation, and aspects for its development in the software process. 
In this paper, we focused on the interviews performed during the research project and the interpretation of the results from these interviews. 11
  • 22. Proceedings 6th Software Measurement European Forum, Rome 2009 This empirical information gives us some hints as to which are the important issues when designing, establishing, and implementing measurement in the context of software engineering. During the past years, many research projects and empirical studies including experiences of measurement utilisation in software engineering have been carried out. Research closely related to this subject in the software measurement field has been performed [7],[8],[9],[10],[11],[12],[13]. In these studies the measurement practices and experiences in software organisations was examined empirically. The main outcome of many of the studies has been practical guidelines for managing measurement implementation and usage in software engineering work. However, this issue still seems to require further examination. The structure of this paper is as follows: Section 2 describes the research context. In section 3, the scope and method used in the empirical study are described. This is followed by section 4, in which the interview results are presented and, in section 5, a discussion and evaluation of the key findings of the study are made. Finally, in section 6, a summary of the paper is presented with a few relevant themes for further research. 2. Research context In qualitative research there is a need for a theory or theories against which the captured data can be considered. Below is a short overview of the purpose of use and utilisation of measurement in business overall and, in particular, in the context of software engineering. It is well known that measurement can be utilised in various ways for many different purposes in business. In general, measurement data provides numerical information, which is often essential for comparing different alternatives and making a reasoned selection among the alternatives. McGarry et al. [14] listed typical reasons for carrying out measurement in business: • Without measurement, operations and decisions can be based only on subjective estimation. • To understand the organisation’s own operations better. • To enable evaluation of the organisation’s own operations. • To be better able to predict what will happen. • To guide processes. • To find justification for improving operations. From the software engineering perspective, a few fundamentals can be pointed out that are considered the key benefits when discussing the utilisation of measurement. In practice, the main purpose of software engineering measurement is to control and monitor the state of the process and the product quality, and also to provide predictability. Therefore, more software engineering-related reasons for measurement are: (e.g. [4],[15],[16]) • To control the processes of software production. • To provide a visible means for the management to monitor their performance level. • To help managers and developers to monitor the effects of activities and changes in all aspects of development. • To allow the definition of a baseline for understanding the nature and impact of proposed changes. • To indicate the quality of the software product (to highlight quality problems). • To provide feedback to drive improvement efforts. 12
  • 23. Proceedings 6th Software Measurement European Forum, Rome 2009 There is no doubt that measurement is widely recognised as an essential part of understanding, predicting and evaluating software development and maintenance projects [17],[18],[19]. However, in the context of software engineering, its utilisation is accepted as a challenging issue [1],[2],[4],[19]. Although information relating to the software engineering process is available for measurement during software development work and, in addition, there are plenty of potential measurement objects and metrics related to this process, the difficulties related to measurement utilisation in this context are considerable. Due to the factors described before, it must be assumed that the basic issue with measurement utilisation is not the lack of measurement data, so in all likelihood there must be some other reasons for this phenomenon. The following section presents one example of how this issue has been studied empirically and also the type of answers that were obtained to explain this phenomenon. 3. Empirical study – Software Measurement project This study is based on the results of the research project that was carried out between 2005 and 2007 in Finland. The starting point of the SoMe (Software Measurement) project [6] was to approach software quality from the software process point of view. The approach selected for examining the recognised issues in relation to software quality was an empirical study, in which the target was to examine measurement practices related to the software process in Finnish software companies. The aim and scope of the project was to examine the current metrics and measurement practices for measuring software processes and product quality in software organisations. The findings presented in this paper are based on the information captured with personal interviews and a questionnaire format during the research project. 3.1. Scope of the research project There were several limitations related to the research population and unit when implementing the study. Firstly, the sample was not a random set of Finnish software companies. The SoMe project was initiated by the Finnish Software Measurement Association (FiSMA) [20] and two Finnish universities, the Tampere University of Technology (TUT) [21], and the University of Joensuu (UJ) [22]. In this study the population was limited so that it covers software companies who are members of FiSMA (which consists of nearly 40 Finnish software companies, among its members are many of the most notable software companies in Finland). Secondly, inside the selected population (FiSMA members) too, the sample was not randomly selected. Instead, purposive sampling was used in this study, which means that the choice of case companies was based on the rationale that new knowledge related to the research phenomenon would be obtained (e.g.[23],[24]). This definition was based on the objective of the SoMe research project, which was primarily to generate a measurement knowledge base consisting of a large metrics database including empirical metrics data [6]. In this case we approached companies that had previous experience with software measurement. The most potential companies who perform software engineering measurement activities were carefully mapped in close co-operation with FiSMA. 
All the participant companies volunteered and the number of the set was limited to ten companies, based on the selected research method (qualitative research), project schedule and research resources. The third limitation relates to the research unit. The target group selected within the company was made up of persons who had experience, knowledge, and understanding of measurement and those responsible for measurement operations and the metrics used in practice. From the research point of view, the research unit was precisely these people in the company. 13
  • 24. Proceedings 6th Software Measurement European Forum, Rome 2009
These activities described above generally come under the job description of quality managers, which was the most common position among the interviewees (see [25]).

3.2. Participants of the research project
The companies involved in the study were pre-selected by virtue of their history in software measurement and voluntary participation. The people interviewed were mainly quality managers, heads of departments and systems analysts, i.e. persons who were responsible for the measurement activities in the organisation. A total of ten companies participated in this study. Three of them operate in the finance industry, two in software engineering, two in ITC services, two in manufacturing and one in automation systems. Six of the companies operate in Finland (Na) only, the other four also multi-nationally (Mn). Table 1 below shows some general information related to the participant companies. The common characteristic shared by these companies is that they all carry out software development independently and they mainly supply IT projects.

Table 1: Description of participating companies.
  Company | Business of the company | SW Business       | SW Application        | National or Multinational | Employees total | Employees SE | SPICE Level
  A       | Finance industry, ICT   | Customer          | Business Systems (BS) | Na                        | 220             | 220          | 3
  B       | Finance industry, ICT   | Customer          | Business Systems      | Na                        | 280             | 150          | 2
  C       | Finance industry, ICT   | Customer          | Business Systems      | Na                        | 450             | 35           | 2
  D       | Finance industry, ICT   | Customer          | Business Systems      | Na                        | 195             | 195          | 3
  E       | SW engineering, ICT     | Customer, Product | BS, Package           | Mn                        | 15000           | 5000         | 2
  F       | SW engineering, ITC     | Customer          | Business Systems      | Na                        | 200             | 60           | 2
  G       | ITC services            | Customer          | Business Systems      | Na                        | 200             | 200          | 3
  H       | Manufacturing           | Customer, Product | Emb Syst, Package     | Mn                        | 1200            | 30           | 1
  I       | Manufacturing           | Product           | Emb Syst, Real-Time   | Mn                        | 24000           | 200          | 2
  J       | Automation industry     | Product           | Real-Time             | Mn                        | 3200            | 120          | 3

Based on the captured information, measurement practices and targets varied slightly between the companies. Most of them had used measurement with systematic data collection and results utilisation for about 10 years, but there were also a couple of companies who had only recently started to adopt more systematic measurement programs. The contact people inside the companies were mainly quality managers, heads of departments and systems analysts, i.e. persons who were responsible for the measurement activities in the organisation.

3.3. Research method used
The nature of the method selected in the SoMe research project was strongly empirical and the research approach selected was a qualitative one based on personal interviews, combined with a questionnaire. This method was used in order to collect the experiences of the companies about the individual metrics used and also opinions and viewpoints about measurement, based on their own experiences. To be exact, in these interview sessions we used a semi-structured theme interview base as a framework. We used the same templates in all interview sessions: one form to collect general information about the company and its measurement practices, and another spreadsheet-style form to collect all metrics the company uses or has used (see [25]). The use of the interview method can be justified by the research findings made by Brown and Duguid [26]. They posit that there is a parallel between knowledge sharing behaviour and other social interaction situations. Huysman and de Wit [27] and Swan et al. [28] have also come to similar conclusions in their research.
Therefore, it was decided to use a research method focusing on personal interviews at regular intervals with participants instead of purely a written questionnaire. It is distinctive of qualitative 14
  • 25. Proceedings 6th Software Measurement European Forum, Rome 2009
research that space is often given to the viewpoints and experiences of the interviewees and their attitudes, feelings, and motives related to the phenomenon in question [29]. These are difficult, or even impossible, to clarify by quantitative research methods. With the interviews we wanted particularly to examine the individual experiences and feelings related to the research theme. Using this approach, the aim is not to generalise. Instead, with face-to-face interviews, our aim is to get the views of the persons about the focal factors and issues related to measurement implementation. With these interview sessions we tried to identify the users' positive and less positive experiences, noted critical issues, and also improvement and development issues as to the current situation regarding measurement.

4. Interview results
The following section highlights the interview results from the stance in which we are particularly interested in this study. This paper presents the respondents' opinions about the four particular themes included in our interview base. The captured information taken from the interviews was carefully analysed by the research team and the results are presented below in a condensed form. The opinions are divided into four theme categories as follows:

Positive experiences of measurement usage:
• Real status determined through measurement.
• Measurement makes matters transparent.
• Scheduling and cost adherence become notably better.
• Resource allocation can be clarified.
• Key issues emphasised using measurement.
• Trends visible in time series.
• Supports decision-making.
• Motivates employees to work in a better and more disciplined way.
• Agreed metrics help when planning company goals and operations.
• Process improvement achieved through measurement.
• Tool for monitoring effects of SPI actions.
• Supports quality system implementation.
• Practical means for integrating a company which has grown through acquisitions and creating a coherent organisation culture.
• Good way of observing trends and benchmarking.
• Can inspire confidence in new customers by presenting measurement data.

Negative experiences of measurement usage:
• Resistance to change at the beginning.
• People in other departments do not see the benefit of measurement.
• Measurement often suffers from a negative image (monitoring).
• Easily viewed as extra work.
• Mainly viewed negatively if the metrics are linked to an employee bonus system.
• Measurement may underline and guide certain operations too much.
• Benefits are quite difficult to see in the short term and also on a personal level.
• Management sets metrics without in-depth knowledge of the trends.
• Concrete utilisation of the metrics is difficult.
• Interpreting the measurement results may be difficult.
• When trying to use without having the necessary measurement skills.
• Measurement without utilising its results is a waste.
Observed issues for improvement and development:
• Measurement results should be made available to a wider audience - wider publication and transparency.
• Insufficient analysis and utilisation of measurement results.
• Measurement utilisation in operative management.
• Developing and implementing the measurement process.
• Assessing realistic measurement goals is challenging.
• Determining how to ensure a particular metric is utilised.
• Extent of measurement (do we measure enough and the right factors?).
• Finding and defining good metrics.
• Lack of forward-looking real-time metrics (need for real-time measurement).
• Lack of measurement of remaining work effort.
• Need to improve the measurement related to the area of change and requirement management.
• Evolving the utilisation of limit areas.
• Delivery of top-quality measurement.
• Improving the utilisation of testing data: should measure exact causes and "originating phase" of defects.
• Increasing the reliability of the collected measurement data.
• Improving performance measurement.

Critical issues in utilising and implementing measurement:
• Demand for management commitment and discipline of the whole organisation.
• There should be a versatile and balanced set of metrics.
• Enough time must be reserved to analyse measurement results and also to make the necessary rework.
• Feedback on measurements must be given to all the data collectors, to sustain motivation.
• Benefits of the measurement must be indicated in some way; if not, motivation will fade.
• Capturing measurement information must be integrated into normal daily work.
• Measurement must be automated as much as possible, so that data capture is not too time-consuming.
• Trends are important, as it takes time to notice process improvement.
• There should be a large number of metrics, which combine to give an accurate status.
• Avoid placing too much weight and over-emphasis on one single measurement result.
• Establishing reliable metrics and implementing them is time-consuming.

These were the summarised experiences of the themes discussed in this study. As a result of the information captured on the theme in the interviews, a few key factors have arisen. It seems that the interviewees rated measurement issues that relate closely to the human aspect as extremely important factors in the measurement context (e.g. utilisation and publishing, purpose of use, measurement benefits, feedback, response, etc.). Other issues that were emphasised relate to the skills and knowledge required for using the metrics and measurement (e.g. interpreting, analysing and utilising the collected measurement data, measurement focus, etc.) and also to difficulties in metrics selection (amount, balance, measurement objects, etc.).
As is well known, measurement itself is a process that requires sufficient resources and know-how of software development work and of the developed software itself [30],[31]. In the following sections the observations related to the human aspect are examined more closely. These interview results are discussed based on the current measurement practices in the participant organisations and also in relation to previous research results.

5. Discussion
This section deals more extensively with one of the key factors that arose repeatedly during the interview sessions - the human aspect. This observation supports the widely accepted concept that there is a strong emphasis on the human nature of software engineering work [32],[33],[34],[35],[36]. Next, the interview results are compared and discussed based on the information that is derived from current measurement practices in the participant organisations.

5.1. Prevalent measurement usage from the human perspective
In one part of the SoMe project, detailed information was collected on the current metrics used in the participant companies. The methods of collecting, analysing and classifying the research material, as well as the entire research process, have been described in previous research papers [6],[37]. Briefly, after the captured measurement data was organised and evaluated, and the necessary eliminations were made, 85 individual metrics were left which fulfilled the definition of the research scope. The following evaluation of the results from the empirical study deals with the measurement users' perspective. This examination includes the results that enabled us to see who the beneficiaries of the measurement results were in practice. For this evaluation, the metrics were classified according to three user groups: software/project engineers; project managers; and upper management (management above project level including, but not restricted to, quality, business, department and company managers). In each group the metrics were further divided into two segments: metrics that are primarily meant for the members of that group, and those that are secondarily used by that group after some other group has had access to them (according to the order given on the questionnaire form, primary or secondary users of a metric may be from more than one user group). Figure 1 (below) presents the users of the measurement data, based on their position within the organisational hierarchy.

[Figure 1 is a bar chart of the number of metrics per user group (Software engineers, Project managers, Upper management), each bar split into Primary and Secondary users: Software engineers 10 primary / 16 secondary, Project managers 22 primary / 16 secondary, Upper management 66 primary / 17 secondary.]
Figure 1: Distribution of measurement results and their users in the company organisation.
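To make the shares quoted in the discussion that follows easier to verify, the following minimal sketch (Python, not part of the original study) simply tallies the aggregate counts read off Figure 1; the underlying per-metric classification data is not reproduced here.

# Aggregate counts read off Figure 1 (primary / secondary users per group).
figure1 = {
    "Software engineers": {"primary": 10, "secondary": 16},
    "Project managers":   {"primary": 22, "secondary": 16},
    "Upper management":   {"primary": 66, "secondary": 17},
}
TOTAL_METRICS = 85  # metrics remaining after the eliminations described above

for group, c in figure1.items():
    reached = c["primary"] + c["secondary"]
    print(f"{group}: reached by {reached}/{TOTAL_METRICS} metrics "
          f"({reached / TOTAL_METRICS:.0%} of all metrics); "
          f"{c['primary']} primarily ({c['primary'] / TOTAL_METRICS:.0%} of all metrics, "
          f"{c['primary'] / reached:.0%} of the metrics reaching this group)")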
With the captured metric data, the users of the measurement data and their position within the organisational hierarchy can be recognised (see Figure 1). It is clear that the majority of the measurement data produced benefits upper management. 98% (83/85) of the data produced by the 85 metrics goes to upper management, 78% (66/85) of it primarily. Project managers are in the middle with 38 metrics, of which 58% (22/38) reach them primarily. Considerably fewer metrics are targeted at the engineering/development level. Software engineers benefit from only 26 metrics, and of those only ten are primarily intended for them. Moreover, according to the study results, roughly 75% (66/85) of the data collecting activities were carried out primarily by developers or project managers.

As Figure 1 clearly shows, the captured measurement data is strongly focused on use by a certain user group. In this case there is obviously an imbalance in who receives the results and in how the metrics are selected for communication in the organisation. Moreover, this phenomenon is confirmed by previous studies: metrics designated for upper management are fairly common in software engineering organisations (e.g. [14],[38],[39]). Next, this result and its possible effects are evaluated from the measurement perspective.

5.2. The human factor – critical for measurement success
The next section highlights the focal, human-related issues of which one should be aware when implementing measurement in the context of software engineering, based on the empirical results obtained. After analysing and evaluating the captured qualitative data, the following key factors seem to be significant enablers from the measurement implementation viewpoint.

Interpreters and analysers of measurement data
Regarding measurement data analysis, it must be recognised that the data collected has no intrinsic value per se, but only as the starting point of an analysis enabling the evaluation of the state of the software process. The final function of the measurement process is precisely to analyse the collected measurement data and draw the necessary conclusions. According to Dybå [8], measurement is meaningless without interpretation and judgment by those who make the decisions and take actions based on them. The human aspect relates closely to this issue of measurement data processing. According to Florac and Carleton [40], whatever level is being measured, problems will occur if the analysers of the measurement data are completely different people from those who collected it. Van Veenendaal and McMullan [41], as well as Hall and Fenton [42], share this view and state that when collecting or analysing measurement data, it is fundamental that the participants are those who have the best understanding of the measurement focus in question. Being aware of this is helpful when establishing and planning new software process measurement activities, especially from the perspective of analysing the measurement results. Based on the results of the study, we can state that the under-representation of engineer-level beneficiaries and the uneven distribution of measurement information very probably have detrimental effects: if not for the projects or software engineering, then at least for the validity and reliability of the measurements and their results.
This result (Figure 1) supports the need, which arose during the interviews, for improving the analysis of measurement data.

Measurement data feedback
Feedback, including information on how the collected measurement data is utilised, is paramount for all organisation levels. As Hall and Fenton [42] identified, the usefulness of measurement data should be obvious to all practitioners.
Therefore, it is important that the measurement results are, where appropriate, open and available to everybody at all levels of the organisation. It should be noted that appropriate measurement supports managerial and engineering processes at all levels of the organisation [43],[44]. Therefore, it is important that measurement and metric systems are designed by software developers for learning, rather than by management for controlling. Measurement provides opportunities for developers, as well as managers, to participate in analysing, interpreting and learning from the results of measurements and to identify concrete areas for improvement. The most effective use of measurement data for organisational learning, and also for process improvement, is to feed the data back in some form to the members of the organisation. According to the results of this study, roughly 75% (66/85) of the data collecting activities were carried out primarily by developers or project managers. Research would indicate that they expect feedback on how the collected measurement data is utilised [13]. In the interviews, the feedback issue did not come up. This may be due to the fact that all those interviewed belonged to upper management, which benefited from 98% of the collected measurement data. However, there is evidence that this feedback issue is problematic. The lack of feedback, and thus of evidence of the usefulness of collected measurement data, has been widely observed (e.g. [13],[42],[45],[46]). The literature contains many cautionary examples of measurement programs failing because of insufficient communication, as the personnel are unable to see the relevance or benefit in collecting data for the metrics [13],[16],[47]. People should be made aware that when collecting measurement data, it is also essential to organise the feedback channel and produce instructions for it, and to give feedback to everyone, or at least to those who gathered or supplied the data. Success in giving feedback is indeed recognised as an important human-related factor in sustaining motivation for measurement [8],[46].

Reaction to the measurement results
There is yet another important human-related issue to highlight concerning measurement utilisation: the reaction to the measurement results. This is one of the most crucial issues from the motivation perspective in relation to measurement data collecting work. One of the significant factors for success in measurement is the motivation of the employees to carry out measurement activities. One of the most challenging assignments is to implement measurement in a manner that has the most positive impact on each of the projects within the organisation [14]. If people can see that the measurement data they have collected is first analysed and then, where necessary, leads to corrective actions, the motivation for collecting the measurement data will be sustained. However, as with the previous issues, there is also evidence of a lack of management of this too. According to the results of a large study [7] made by the Software Engineering Institute at Carnegie Mellon University (software practitioners, 84 countries, 1895 responses), only 40% of all respondents reported that corrective action is taken when a measurement threshold has been exceeded, and close to 20% of respondents reported that corrective action is rarely or never taken when a measurement threshold is exceeded.
Furthermore, the studies carried out by van Solingen and Berghout [19], Hall et al. [13] and Mendonca and Basili [48] support this observation. Several observations that came up in these interviews (see Section 4) relate closely to this issue. These included the fact that the concrete utilisation of the metrics and the interpretation of the measurement results are often considered difficult, that measurement without utilising its results is a waste, and that there is a need for wider publication and transparency of the measurement results. These are examples of issues that can be linked to a lack of reaction to measurement results and thus to decreased motivation to carry out measurement.
The factors presented above tend to be considered essential from the human perspective in the context of measurement in software engineering. The issues described - the beneficiaries, the analysis and also the appropriate organisation of the collected measurement data - must be taken into consideration in connection with measurement. Previous studies support these observations, derived from interpretation of the data captured during the qualitative study. This strengthens the reliability of the study results. In conclusion, it can be stated that paying attention to these issues may enhance measurement usage in software engineering work.

5.3. Evaluation of the study results
In qualitative research, such as the interviews in this case, subjective opinion is allowed and subjectivity is unavoidable [49]. With qualitative material, generalisations cannot be made in the same way as in quantitative studies. It should thus be noted that generalisations can only be made on the grounds of interpretation, not directly on the grounds of the research material [29]. Therefore, the most challenging part of this approach from the researcher's point of view is data interpretation and the subsequent selection of the key observations and themes that came up in the research material. Finally, it is very much a question of the researchers' interpretation which of the results they consider to be the most significant. If one looks for possible weaknesses or bias in qualitative research regarding reliability, it could be said to lie in the phase of analysing and interpreting the research material. However, the results of previous studies support the observations presented here. On the other hand, because the starting point of the study was the SoMe project, which was designed to capture experience-based measurement data, it could be said that the sample does not correspond well to the population from the aspect of software engineering measurement activity. Therefore it must be assumed that the participant companies were more measurement-oriented than the average population. This, together with their being members of FiSMA, may create a bias in the results, as these companies are more likely to perform measurement than the Finnish software industry in general. From the external validity point of view this means that the results cannot be generalised to represent normal practice in the population. However, the results of this study can be applied when examining the current measurement practices and metrics used in a similar population and with a similar research focus. A statistical examination of the study results was omitted, because of the qualitative research method used and also due to the limited amount of research data. A statistical generalisation of the results cannot be made with the collected data set (e.g. [50]). As stated by Briand et al. [51], sampling too small a dataset leads to a lack of statistical authority. However, a more detailed validity and reliability examination of the research carried out and of its results is given in the doctoral thesis [25], of which the research material presented here is the focal data.

6. Conclusion
This article dealt with the possible reasons for the observation that, in practice, software process measurement is utilised to a rather limited extent in software companies.
The empirical data used in this study was based on a software research project on the Finnish software industry's measurement practices. With the help of empirical information obtained through interview sessions, this paper presented the viewpoints on measurement issues of a set of Finnish software companies and their experiences. Based on the study results, this paper mapped out and discussed the potential reasons for the challenges facing measurement utilisation in the software process.
This paper focused on the human factor, which was strongly emphasised in the interviews. Based on the empirical results obtained, a set of human-related issues arose which one should be aware of and also take into account when implementing measurement in the context of software engineering. The results showed that the users felt these issues to be the most crucial when implementing measurement. The observations made in this study pointed out where measurement needs to be focused and improved. The interview results also exposed some points that should be improved in current measurement practices. These results can be used to evaluate and focus on the factors that must be noted when trying to determine the reasons why measurement is only utilised to a limited extent in software production processes in practice.

In conclusion, the empirical information presented in this paper can help when establishing or improving measurement practices in software organisations. The information provided by the empirical study enables us to recognise where the critical issues lie in measurement implementation. This may help us to take a short step on the road toward extended utilisation of measurement in software engineering. The results also showed some of the potential directions for further research on this theme. One interesting issue for future research would be to examine what really happens after the measurement data is analysed and how the response process is organised in practice in these organisations.

7. References
[1] Bititci, U. S., Martinez, V., Albores, P. and Parung, J., "Creating and managing value in collaborative networks", International Journal of Physical Distribution and Logistics Management, vol. 40, no. 3, 2004, pp. 251-268.
[2] Testa, M. R., "A Model for organisation-based 360 degree leadership assessment", Leadership and Organisation Development Journal, vol. 23, no. 5, 2002, pp. 260-268.
[3] Ebert, C., Dumke, R., Bundschuh, M. and Schmietendorf, A., "Best Practices in Software Measurement: How to Use Metrics to Improve Project and Process Performance", Springer, 2004.
[4] Fenton, N. and Pfleeger, S., "Software Metrics: A Rigorous & Practical Approach", 2nd edition, International Thompson Computer Press, 1997.
[5] Kulik, P., "Software Metrics State of the Art 2000", KLCI Inc., 2000, http://www.klci.com.
[6] Soini, J., "Approach to Knowledge Transfer in Software Measurement", International Journal of Computing and Informatics: Informatica, vol. 31, no. 4, Dec 2007, pp. 437-446.
[7] Kasunic, M., "The State of Software Measurement Practice: Results of 2006 Survey", Technical Report CMU/SEI-2006-TR-009, ESC-TR-2006-009, Software Engineering Institute (SEI), 2006.
[8] Dybå, T., "An Empirical Investigation of the Key Factors for Success in Software Process Improvement", IEEE Transactions on Software Engineering, vol. 31, no. 5, 2005, pp. 410-424.
[9] Card, D. N. and Jones, C. L., "Status Report: Practical Software Measurement", in the 3rd International Conference on Quality Software (QSIC'03) proceedings, pp. 315-320, 2003.
[10] Baddoo, N. and Hall, T., "De-motivators for software process improvement: an analysis of practitioners' views", The Journal of Systems and Software, vol. 66, no. 1, 2003, pp. 23-33.
[11] Rainer, A. and Hall, T., "A quantitative and qualitative analysis of factors affecting software processes", Journal of Systems and Software, vol. 66, no. 1, 2003, pp. 7-21.
[12] Dybå, T., "An Instrument for Measuring the Key Factors of Success in Software Process Improvement", Empirical Software Engineering, vol. 5, no. 4, 2000, pp. 357-390.
[13] Hall, T., Baddoo, N. and Wilson, D. N., "Measurement in Software Process Improvement Programmes: An Empirical Study", in the 10th International Workshop on Software Measurement (IWSM 2000) proceedings, pp. 73-83, 2000.
[14] McGarry, J., Card, D., Jones, C., Layman, B., Clark, E., Dean, J. and Hall, F., "Practical Software Measurement. Objective Information for Decision Makers", Addison-Wesley, 2002.
[15] Zahran, S., "Software Process Improvement: Practical Guidelines for Business Success", Addison-Wesley, 1997.
[16] Briand, L. C., Differding, C. M. and Rombach, H. D., "Practical Guidelines for Measurement-Based Process Improvement", Software Process – Improvement and Practice, vol. 2, no. 4, 1996, pp. 253-280.
[17] Card, D. N., "Integrating Practical Software Measurement and the Balanced Scorecard", in the 27th Annual International Computer Software and Applications Conference (COMPSAC'03) proceedings, pp. 362-367, 2003.
[18] Dybå, T., "Factors of Software Process Improvement Success in Small and Large Organisations: An Empirical Study in the Scandinavian Context", in the 9th European Software Engineering Conference (ESEC'03) proceedings, pp. 148-157, 2003.
[19] van Solingen, R. and Berghout, E., "The Goal/Question/Metric Method: A Practical Guide for Quality Improvement of Software Development", McGraw-Hill, 1999.
[20] FiSMA, Finnish Software Measurement Association, 2008, http://www.fisma.fi/eng/index.htm.
[21] TUT, Tampere University of Technology, 2008, http://www.tut.fi/
[22] UJ, University of Joensuu, 2008, http://www.joensuu.fi/englishindex.html.
[23] Kitchenham, B., Pfleeger, S. L., Pickard, L., Jones, P., Hoaglin, D., El Emam, K. and Rosenberg, J., "Preliminary Guidelines for Empirical Research in Software Engineering", IEEE Transactions on Software Engineering, vol. 28, no. 8, 2002, pp. 721-733.
[24] Curtis, S., Gesler, W., Smith, G. and Washburn, S., "Approaches to sampling and case selection in qualitative research: examples in geography of health", Social Science & Medicine, vol. 50, issue 7-8, 2000, pp. 1001-1014.
[25] Soini, J., "Measurement in the software process: How practice meets theory", Doctoral thesis, ISBN 978-952-15-2060-0, Tampereen teknillinen yliopisto, Publication 767, 2008.
[26] Brown, J. S. and Duguid, P., "Balancing Act: How to capture Knowledge without Killing It", Harvard Business Review, vol. 78, no. 3, 2000, pp. 73-80.
[27] Huysman, M. and de Wit, D., "Knowledge sharing in practice", Kluwer Academic Publishers, 2002.
[28] Swan, J., Newell, S., Scarborough, H. and Hislop, D., "Knowledge management and innovation: networks and networking", Journal of Knowledge Management, vol. 3, no. 4, 1999, pp. 262-275.
[29] Räsänen, P., Anttila, A. H. and Melin, H., "Tutkimusmenetelmien pyörteissä" (in Finnish), WS Bookwell Oy, 2005.
[30] Schneidewind, N., "Knowledge requirements for software quality measurement", Empirical Software Engineering, vol. 6, no. 3, 2001, pp. 201-205.
[31] Baumert, J. and McWhinney, M., "Software Measures and the Capability Maturity Model", Software Engineering Institute Technical Report, CMU/SEI-92-TR-25, ESC-TR-92-0, 1992.
[32] Ravichandran, T. and Rai, A., "Structural Analysis of the Impact of Knowledge Creation and Knowledge Embedding on Software Process Capability", IEEE Transactions on Engineering Management, vol. 50, no. 3, 2003, pp. 270-284.
[33] Nowak, M. J. and Grantham, C. E., "The virtual incubator: managing human capital in the software industry", Research Policy, vol. 29, no. 2, 2000, pp. 125-134.
[34] O'Regan, P., O'Donnell, D., Kennedy, T., Bontis, N. and Cleary, P., "Perceptions of intellectual capital: Irish evidence", Journal of Human Resource Costing and Accounting, vol. 6, no. 2, 2001, pp. 29-38.
[35] Drucker, P., "Knowledge-Worker Productivity. The Biggest Challenge", California Management Review, vol. 41, no. 2, 1999, pp. 79-94.
[36] Ould, M. A., "Software Quality Improvement Through Process Assessment – A View from the UK", IEEE Colloquium on Software Quality, pp. 1-8, 1992.
[37] Soini, J., Tenhunen, V. and Mäkinen, T., "Managing and Processing Knowledge Transfer between Software Organisations: A Case Study", in the International Conference on Management of Engineering and Technology (PICMET'07) proceedings, pp. 1108-1113, 2007.
[38] Kald, M. and Nilsson, F., "Performance Measurement at Nordic Companies", European Management Journal, vol. 18, no. 1, 2000, pp. 113-127.
[39] Gibbs, W. W., "Software's Chronic Crisis", Scientific American, vol. 271, no. 3, Sept 1994, pp. 86-95.
[40] Florac, W. A. and Carleton, A. D., "Measuring the Software Process: Statistical Process Control for Software Process Improvement", Addison-Wesley Longman, Inc., 1999.
[41] van Veenendaal, E. and McMullan, J., "Achieving Software Product Quality", UTN Publishers, 1997.
[42] Hall, T. and Fenton, N., "Implementing Effective Software Metrics Programs", IEEE Software, vol. 14, no. 2, 1997, pp. 55-65.
[43] Simmons, D. B., "A Win-Win Metric Based Software Management Approach", IEEE Transactions on Engineering Management, vol. 39, no. 1, 1992, pp. 32-41.
[44] Andersen, O., "The Use of Software Engineering Data in Support of Project Management", Software Engineering Journal, vol. 5, issue 6, 1990, pp. 350-356.
[45] Varkoi, T. and Mäkinen, T., "Software Process Improvement Network in the Satakunta Region – SataSPIN", in the European Software Process Improvement Conference (EuroSPI'99) proceedings, 1999.
[46] Grady, R. B. and Caswell, D. L., "Software Metrics: Establishing a Company-Wide Program", Prentice-Hall, 1987.
[47] Kan, S. H., "Metrics and Models in Software Quality Engineering", 2nd edition, Addison-Wesley, 2005.
[48] Mendonca, M. G. and Basili, V. R., "Validation of an Approach for Improving Existing Measurement Frameworks", IEEE Transactions on Software Engineering, vol. 26, no. 6, 2000, pp. 484-499.
[49] Eskola, J. and Suoranta, J., "Johdatus laadulliseen tutkimukseen" (in Finnish), 7th edition, Vastapaino, Tampere, 2005.
[50] Lukka, K. and Kasanen, E., "The problem of generalizability: anecdotes and evidence in accounting research", Accounting, Auditing and Accountability Journal, vol. 8, no. 5, 1995, pp. 71-90.
[51] Briand, L., El Emam, K. and Morasca, S., "Theoretical and Empirical Validation of Software Product Measures", ISERN Technical Report 95-03, 1995.
IFPUG function points or COSMIC function points?

Gianfranco Lanza

Abstract
The choice of the most suitable metric to obtain the functional measure of a software application is not as easy a task as it might appear. Nowadays a software application is often composed of many modules, each with its own characteristics, its own programming languages and so on. In some cases one metric may appear more suitable than another for measuring a particular piece of software, so in the process of measuring a whole application it is possible to apply different metrics: obviously what we cannot do is add one measure to the other (we cannot add inches to centimetres!). In our organisation we are starting to use COSMIC function points to measure some parts of an application and IFPUG function points to measure other parts. One possible problem is: "when management asks us for a single measure of the functional dimension of the application, what can we do?" It is possible to obtain a single measure using the conversion factors from COSMIC to IFPUG, but it would be better to maintain two distinct measures. We are starting to compare the two metrics in different environments: in some cases there is a significant difference in the functional measure. Using a metric is mainly a matter of culture. It is important to know what a metric can give us and what it cannot, but above all what our goal is. If, for example, our goal is to obtain a forecast of effort and costs, we have to know the productivity associated with each metric in the different environments: so we have to collect data! This paper illustrates how we can use both metrics in our applications.

1. Introduction
Every day we deal with metrics to satisfy our needs. For example, how many times a day do we look at our watch to know what time it is? We choose the right metric to satisfy our goal. If the goal is to know the time, we use the metric "time". If the goal is to weigh some fruit, we use the metric that gives us the weight. Sometimes metrics give us measures that represent the same concept but in different measurement units (e.g. inches and centimetres for length). Metrics were born to measure, and we have to measure to know. Every metric gives us some information; functional metrics give us information about the size of our software products. At every moment of our lives we have to use many metrics; in the world of application sizing, is it sufficient to use one single metric? Can one single metric be sufficient to answer all our questions, all our needs? Today, the architecture of our software applications is complex and full of software components, each one with its own characteristics, programming languages and so on. It is quite normal, therefore, to use different metrics according to the nature of the software component that we have to measure.

2. User's point of view
The dimensional measure of a software application is influenced by the choice of the user; the user's point of view is very important in determining our measure. The choice of the user also depends on the goal that we have.
If we have to measure our software application because our client wants to know the functional dimension of the software he will buy, the user is our client. If we have to measure the software application to obtain information about the effort needed to develop it, the user could be different. In a client-server architecture we may want to know the functional dimension of the client component and of the server component as if they were two different applications; we could have two different boundaries, simply because we could have different productivity data for the client component and for the server component. If our application is built from business services, we may have to measure these business services separately in order to know the effort of developing them; if some of these business services already exist, we would like to know how much reuse we will have. In an application we can have different user points of view that give us different measures; the important thing is that we must not mix them!

3. Elementary Process
To determine the functional dimension of an application it is necessary to identify the elementary processes, independently of the metric we use. The elementary process is the smallest unit of activity which satisfies all of the following:
• Is meaningful to the user.
• Constitutes a complete transaction.
• Is self-contained.
• Leaves the business of the application being counted in a consistent state [1].
The elementary process depends on the user's point of view and, since at the end of the process the system has to be in a consistent state, it cannot be divided into sub-processes. In an application, if we change the user's point of view, we can have different elementary processes.

4. Batch Processes
In some cases a batch process is a stream of processes, each of them linked together to constitute a single elementary process, whichever user's point of view we take. If something goes wrong during the execution of one process, the whole batch process has to be restored. Figure 1 represents the schema of this type of batch process.

[Figure 1 shows a chain of processes (Process 1, Process 2, ..., Process n) executed in sequence from Start to End as a single batch.]
Figure 1: Batch process.

4.1. Batch processes using IFPUG function points
If we use IFPUG function points, we have to consider one single elementary process for one batch process (in fact we cannot divide the elementary process into different elementary processes, because the system would not be in a consistent state at the end of each process of the stream; this is independent of the user's point of view). Then we have to discover the primary intent of this process: whether it is to maintain some logical files or change the system behaviour, or whether it is to expose data outside the boundary of our application. In the first case we have an EI, otherwise an EO or EQ.
To establish the complexity of our IFPUG functions we have to identify the FTRs (File Types Referenced) and the DETs (Data Element Types); we can obtain from 3 to 7 IFPUG function points, according to the complexity and the type of the transactional function. We have to add these function points to the function points of the data functions (ILF or EIF), but very often these batch processes are inside the boundary of a client-server or web application, and the data functions that they reference are the same ones counted in that client-server or web application; being inside the same boundary, we cannot count our data functions twice! We also have to observe that in an application of great dimension we can have a great number (more than 50) of these batch processes. At the end of the counting, in any case, the number of function points for each of these processes is very similar, probably with a difference of three or four function points. Is this measure useful to us? Is it sufficiently significant to apply, for example, a model to estimate the cost and the effort of developing the batch processes? Probably the answer is no!

4.2. Batch processes using COSMIC function points
If we use COSMIC function points we also have to identify the elementary process from the user's point of view. In this case the elementary process is the same as that identified in the IFPUG function point process. In COSMIC function points there are no data functions, but there are objects of interest and data movements; each data movement has one object of interest [2]. There are four types of data movements: Entry, Exit, Read and Write. In a batch process with a stream of processes linked together to constitute a single elementary process there is a multitude of data movements. Figure 2 illustrates an example. The weight of each data movement is one single function point. There is no limit to the number of data movements in an elementary process (at least one Entry and one of the others), so we can have functional processes with a large difference in function points, according to the number of data movements. Is this measure useful to us? Is it sufficiently significant to apply, for example, a model to estimate the cost and the effort of developing the batch processes? Probably the answer is yes!

[Figure 2 shows a batch process with its data movements: Entries (Entry 1 ... Entry m), Reads (Read 1 ... Read k), Updates (Update 1 ... Update j) and Exits (Exit 1 ... Exit n).]
Figure 2: Data Movements.

4.3. A real case
Table 1 presents the functional measure of the batch processes of a business application in IFPUG function points, and Table 2 the measure in COSMIC function points. The number of IFPUG function points of each batch process is obtained by adding the weight of its transactional functions to a portion of the data function weight (the entire data function weight is 228 FP; 228 FP / 32 batch processes = 7.125 FP per process).
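Before looking at the real case, the following minimal sketch (Python, not taken from the paper) makes the contrast between the two counting behaviours concrete for a single, hypothetical batch process. The data-movement list and the FTR/DET figures are invented for illustration, and the FTR/DET thresholds and EO weights (Low = 4, Average = 5, High = 7) are the standard IFPUG CPM values, assumed here rather than quoted from this paper.

# COSMIC side: list the data movements of the batch process
# (Entry, eXit, Read, Write); each data movement weighs 1 CFP.
data_movements = ["E"] + ["R"] * 12 + ["W"] * 9 + ["X"] * 3   # 25 movements
cosmic_cfp = len(data_movements)

# IFPUG side: the whole batch is ONE elementary process, here an External Output.
# Its size depends only on the FTRs and DETs of that single transaction.
def eo_function_points(ftr: int, det: int) -> int:
    ftr_band = 0 if ftr <= 1 else 1 if ftr <= 3 else 2
    det_band = 0 if det <= 5 else 1 if det <= 19 else 2
    complexity = [["Low", "Low", "Average"],
                  ["Low", "Average", "High"],
                  ["Average", "High", "High"]][ftr_band][det_band]
    return {"Low": 4, "Average": 5, "High": 7}[complexity]

ifpug_fp = eo_function_points(ftr=5, det=30)   # even a "big" batch caps at 7 FP

print(f"COSMIC: {cosmic_cfp} CFP, IFPUG: {ifpug_fp} FP (EO)")
# Doubling the internal Reads and Writes would double the COSMIC size,
# while the IFPUG size of this single elementary process would stay at 7 FP.

The point of the sketch is simply that the COSMIC size grows with the internal data movements, while the IFPUG size of the single elementary process stays within the 3 to 7 FP range discussed above.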
The number of COSMIC function points is obtained from the number of data movements of each batch process.

Table 1: The IFPUG function points counting result.
Name | Function | FP
Print and Update BATCH
CTTRIS000-002 | EOH | 7
CTTRIS000-003 | EOH | 7
CTTRIS000-008 | EOH | 7
CTTRIS500-001 | EOH | 7
CTTRIS500-003 | EOA | 5
CTTRIS600-001 | EOH | 7
CTTRTP000-002 | EOH | 7
CTTRTP000-003 | EOH | 7
CTTRTP000-005 | EOH | 7
CTTRTP000-008 | EOH | 7
CTTRTP000-009 | EOH | 7
CTTRTP000-010 | EOH | 7
CTTRTP000-011 | EOH | 7
CTTRTP000-012 | EOH | 7
CTTRTP000-013 | EOH | 7
CTTRTP000-014 | EOH | 7
CTTRTR500-001 | EOH | 7
CTTRTR600-001 | EOH | 7
CTTRTR600-002 | EOH | 7
CTTRTR600-004 | EOH | 7
CTTRTR800-001 | EOH | 7
CTTRTR800-002 | EOH | 7
CTTRTS000-001 | EOH | 7
CTTRTS700-001 | EOH | 7
CTTRTS700-002 | EOH | 7
CTTRTS700-008 | EOH | 7
CTTRTU000-001 | EOH | 7
CTTRTU000-002 | EOH | 7
CTTRTU000-003 | EOH | 7
CTTRTU000-008 | EOH | 7
CTTRTS700-001 | EOH | 7
CTTRTS700-002 | EOH | 7
Total = 222
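As a sanity check on Table 1 and on the allocation of the data function weight described above, the following minimal sketch (Python, not part of the paper) reproduces the totals; the figures are taken directly from the table and the text, and the per-process sum is ordinary arithmetic.

# Transactional FP from Table 1: 31 processes counted as EO High (7 FP each)
# and one counted as EO Average (5 FP).
transactional_fp = 31 * 7 + 1 * 5       # = 222, the "Total" row of Table 1

# The 228 FP of data functions are shared evenly over the 32 batch processes.
data_fp_share = 228 / 32                # = 7.125 FP per batch process

# IFPUG size of a typical (EO High) batch process: transactional FP + data share.
typical_process_fp = 7 + data_fp_share  # = 14.125 FP

print(transactional_fp, data_fp_share, typical_process_fp)   # 222 7.125 14.125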