Software Metrics

An introduction to Software Metrics


  1. 1. Software Metrics Massimo Felici Massimo Felici - Software Metrics, 1999 1
  2. 2. Overview. Part I: Introduction to Software Metrics; Part II: Introduction to SQUID; Part III: Case Studies; Part IV: SQUID Tool Demo. Massimo Felici - Software Metrics, 1999 2
  3. 3. Part I: Introduction to Software Metrics. Massimo Felici - Software Metrics, 1999 3
  4. 4. Introduction to Software Metrics Overview • Measurement: what is it and why do it? • The basics of measurement • Software metrics data collection • Analysing software measurement data • Measuring internal product attributes: size • Measuring internal product attributes: structure • Measuring external product attributes • Software reliability: measurement and prediction Massimo Felici - Software Metrics, 1999 4
  5. 5. Measurement: what is it and why do it? “Measurement is the process by which numbers or symbols are assigned to attributes of entities in the real world in such a way as to describe them according to clearly defined rules.” Massimo Felici - Software Metrics, 1999 5
  6. 6. Measurement Examples: height, weight, speed. Massimo Felici - Software Metrics, 1999 6
  7. 7. Measurement in Software Engineering • Neglect of measurement in software engineering (e.g., unclear objectives, unclear costs, prediction vs production, no evidence for new technologies) • Objectives for software measurement (e.g., manager and engineer viewpoints) • Measurement for understanding, control and improvement Massimo Felici - Software Metrics, 1999 7
  8. 8. Diverse Quality Viewpoints: "I want it easy to understand and cheap to modify." "I want low risk and an easy demonstration." Massimo Felici - Software Metrics, 1999 8
  9. 9. Diverse Quality Needs. [Figure: different systems, or different parts of the same system, have different critical qualities (e.g., engine control, guidance, food management, statistics, seat bookings); statements about critical qualities don't apply to total systems.] Massimo Felici - Software Metrics, 1999 9
  10. 10. User View • Software that meets the user's needs • Based around users' tasks – Sometimes very context dependent • Supported by – Reliability modelling – Performance modelling – Usability laboratories Massimo Felici - Software Metrics, 1999 10
  11. 11. The Scope of Software Metrics• Cost and effort estimation• Productivity models and measures• Data collection• Quality models and measures (ISO 9126)• Reliability models• Performance evaluation and models• Structural and complexity metrics• Management by metrics• Evaluation of methods and tools• Capability maturity assessment (CMM) Massimo Felici - Software Metrics, 1999 11
  12. 12. The Basics of Measurement• Empirical relations (that is, mappings from the empirical real world to a formal mathematical world)• The rules of mapping “measurement is defined as a mapping from the empirical world to the formal relational world. Consequently, a measure is the number or symbol assigned to an entity by this mapping in order to characterise an attribute”• The representation condition of measurement: “a mapping has to preserve an empirical relation by a numerical relation” Massimo Felici - Software Metrics, 1999 12
  13. 13. An Example of Measure (Mr. A, Mr. B, Mr. C). Empirical relation: A taller than B taller than C. Mathematical relation: height of A greater than height of B greater than height of C, i.e., height_A = 1.7 m > height_B = 1.6 m > height_C = 1.5 m. Massimo Felici - Software Metrics, 1999 13
  14. 14. Measurement and Models • Defining attributes (e.g., size, complexity, etc.) • Direct and indirect measurement • Measurement for prediction: "A prediction system consists of a mathematical model together with a set of prediction procedures for determining unknown parameters and interpreting results" (Littlewood, 1988) Massimo Felici - Software Metrics, 1999 14
  15. 15. Examples of Common Indirect Measures. Programmer productivity = LOC produced / person-months of effort. Module defect density = number of defects / module size. Defect detection efficiency = number of defects detected / total number of defects. Requirements stability = number of initial requirements / total number of requirements. Test effectiveness ratio = number of items covered / total number of items. System spoilage = effort spent fixing faults / total project effort. Massimo Felici - Software Metrics, 1999 15
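These ratio-style measures are easy to mechanise. The following minimal sketch (not part of the original slides; all counts are invented, illustrative values) computes a few of them:

```python
# Illustrative sketch of the indirect measures above; all inputs are
# hypothetical example values, not data from the slides.

def ratio(numerator, denominator):
    """Guarded division used by all the indirect measures."""
    return numerator / denominator if denominator else float("nan")

loc_produced = 12_000          # lines of code delivered
effort_pm = 30                 # person-months of effort
defects_found = 45             # defects detected during testing
defects_total = 60             # defects known overall (found + field)
initial_reqs, total_reqs = 80, 110

print("Programmer productivity:", ratio(loc_produced, effort_pm), "LOC/person-month")
print("Defect detection efficiency:", ratio(defects_found, defects_total))
print("Requirements stability:", ratio(initial_reqs, total_reqs))
```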
  16. 16. Measurement Scales and Scale Types • Nominal (e.g., labelling, classifying entities) • Ordinal (e.g., level of severity) • Interval (e.g., temperature in degrees Celsius) • Ratio (e.g., length of physical objects) • Absolute (e.g., counting entities) Massimo Felici - Software Metrics, 1999 16
  17. 17. An Example of Interval Scale • Measure air temperature on a Fahrenheit or Celsius scale • Transform Celsius to Fahrenheit using the transformation F = (9/5)C + 32 • Compare intervals between the two scales • Do not compare ratios on the two scales (e.g., by two-thirds, 50%, etc.) Massimo Felici - Software Metrics, 1999 17
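A quick worked check (added here for illustration) of why ratio statements are not meaningful on an interval scale: intervals survive the Celsius-to-Fahrenheit transformation, ratios of raw values do not.

```python
# Interval-scale check: ratios of temperatures are not invariant under
# the allowable transformation F = (9/5)C + 32, but ratios of intervals are.
def c_to_f(c):
    return 9 / 5 * c + 32

a, b, ref = 20.0, 10.0, 0.0
print(a / b, c_to_f(a) / c_to_f(b))            # 2.0 vs ~1.36: not meaningful
print((a - ref) / (b - ref),
      (c_to_f(a) - c_to_f(ref)) / (c_to_f(b) - c_to_f(ref)))  # 2.0 vs 2.0: meaningful
```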
  18. 18. Meaningfulness in Measurement“A statement involving measurement ismeaningful if its truth value is invariant totransformations of allowable scales.” Massimo Felici - Software Metrics, 1999 18
  19. 19. Software Metrics Data Collection. What is good data? Correctness: Are they correct? Accuracy: Are they accurate? Precision: Are they appropriately precise? Consistency: Are they consistent? Timeliness: Are they associated with a particular activity or time period? Repeatability: Can they be replicated? Massimo Felici - Software Metrics, 1999 19
  20. 20. How to define data. [Diagram: processes, products and resources yield raw data through direct measurement; raw data are refined, and indirect measurement derives attribute values from the refined data.] Massimo Felici - Software Metrics, 1999 20
  21. 21. Terminology: error, fault, failure, change. • A fault occurs when a human error results in a mistake in some software product; that is, the fault is the encoding of the human error. • A failure is the departure of a system from its required behaviour. • In order to remove faults we make changes. One fault can result in multiple changes to one product (such as changing several sections of a piece of code) or multiple changes to multiple products (such as changes to requirements, design and code). Massimo Felici - Software Metrics, 1999 21
  22. 22. What do we need to record of a problem? Location: Where did the problem occur? Timing: When did it occur? Symptom: What was observed? End result: Which consequence resulted? Mechanism: How did it occur? Cause: Why did it occur? Severity: How much was the user affected? Cost: How much did it cost? Massimo Felici - Software Metrics, 1999 22
  23. 23. Notice • This is only a generic schema for the information relevant to all types of incidents. • We can collect information about errors, faults, failures and changes. • Depending on the particular measurement objectives (and the current sophistication of the metrics programme), it may not be necessary to collect all the information for each incident. • You cannot use the same form to capture fault and failure data. Massimo Felici - Software Metrics, 1999 23
  24. 24. What else do we need? • Orthogonal classification • Levels of granularity • Data collection forms • Database Massimo Felici - Software Metrics, 1999 24
  25. 25. Analysing Software-Measurement Data • The nature of the data: sampling, population and data distribution • Purpose of the experiment: confirming a theory or exploring a relationship • Design consideration Massimo Felici - Software Metrics, 1999 25
  26. 26. Examples of Simple Analysis Techniques • Box plots • Scatter plots • Control charts • Bar graphs • Correlation • Fitting Massimo Felici - Software Metrics, 1999 26
  27. 27. Box Plots: a graphical representation of the spread of data. [Figure: box plot of control flow paths (scale 0-220), showing lower tail, lower quartile, median, upper quartile, upper tail and outliers.] Median: middle-ranked item in the data set. Upper quartile: median of the values that are greater than the median. Lower quartile: median of the values that are less than the median. Box length = upper quartile - lower quartile. Upper tail = upper quartile + 1.5 * box length. Lower tail = lower quartile - 1.5 * box length. Massimo Felici - Software Metrics, 1999 27
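A minimal sketch (illustrative, not from the slides) of the box-plot statistics as defined above; the quartile convention follows the slide, and the data values are invented control-flow-path counts.

```python
# Box-plot statistics as defined on the slide: quartiles are medians of
# the lower/upper halves, tails lie 1.5 box lengths beyond the quartiles.
def median(values):
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

def box_plot_stats(data):
    s = sorted(data)
    med = median(s)
    lq = median([x for x in s if x < med])
    uq = median([x for x in s if x > med])
    box = uq - lq
    lower_tail, upper_tail = lq - 1.5 * box, uq + 1.5 * box
    outliers = [x for x in s if x < lower_tail or x > upper_tail]
    return med, lq, uq, lower_tail, upper_tail, outliers

# Hypothetical control-flow-path counts per module
print(box_plot_stats([12, 25, 30, 41, 44, 57, 62, 80, 95, 210]))
```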
  28. 28. Scatter Plot. [Figure: scatter plot of number of faults (0-18) against LOC (0-600).] Scatter plots are used to represent data for which two measures are given for each entity. Massimo Felici - Software Metrics, 1999 28
  29. 29. Control Charts • Help you to see when your data are within acceptable bounds. • By watching the data trends over time, you can act to prevent problems before they occur. Massimo Felici - Software Metrics, 1999 29
  30. 30. Bar GraphsMassimo Felici - Software Metrics, 1999 30
  31. 31. Measuring Internal Product Attributes: Size length functionality complexity Massimo Felici - Software Metrics, 1999 31
  32. 32. Measuring Internal Product Attributes: Size. The combination of different measures of the same attribute can give additional information. [Figure: height_A, height_B, weight_A, weight_B.] Massimo Felici - Software Metrics, 1999 32
  33. 33. Measuring Length: Line Of Code (LOC) • Definition of Line Of Code • Programming language • Non-textual or external code • Specifications and designs • Predicting length • Reuse of code Massimo Felici - Software Metrics, 1999 33
  34. 34. Measuring Size: Functionality• Function points (external inputs, external outputs, external inquiries, external files, internal files)• COCOMO 2.0 approach• DeMarco’s approach Massimo Felici - Software Metrics, 1999 34
  35. 35. Measuring Size: Complexity• Problem with indirect measures (e.g., elapsed time)• Complexity of a problem (e.g., algorithm complexity) Massimo Felici - Software Metrics, 1999 35
  36. 36. Measuring Internal Product Attributes: Structure • The control flow addresses the sequence in which instructions are executed in a program. • Data flow follows the trail of a data item as it is created or handled by a program. • Data structure is the organisation of the data itself, independent of the program. Massimo Felici - Software Metrics, 1999 36
  37. 37. Flowgraph Model of Structure. A flowgraph is a directed graph in which two nodes, the start node and the stop node, obey special properties. The stop node has no outgoing arcs, and every node lies on some path from the start node to the stop node. Massimo Felici - Software Metrics, 1999 37
  38. 38. Common Flowgraphs from Program Structure Models. [Figure: flowgraphs for the common program structures: sequence (X1; X2; X3; …; Xn-1; Xn), if A then X, if A then X else Y, case A of a1: X1, a2: X2, …, an: Xn, while A do X, and repeat X until A.] Massimo Felici - Software Metrics, 1999 38
  39. 39. Sequencing and Nesting• Let F1 and F2 be two flowgraphs. Then the sequence of F1 and F2 is the flowgraph formed by merging the stop node of F1 with the start node of F2.• Let F1 and F2 be two flowgraphs. Suppose F1 has a procedure node x. Then the nesting of F2 onto F1 at x is the flowgraph formed from F1 by replacing the arc from x with the whole of F2 . Massimo Felici - Software Metrics, 1999 39
  40. 40. Prime Flowgraphs. Prime flowgraphs are flowgraphs that cannot be decomposed non-trivially by sequencing and nesting. Massimo Felici - Software Metrics, 1999 40
  41. 41. The Generalised Notion of Structuredness. Let S be a family of prime flowgraphs. A family of graphs is S-structured (or, more simply, the members are S-graphs) if it satisfies the following recursive rules: • Each member of S is S-structured. • If F and F' are S-structured graphs, then so are the sequence of F and F', and the nesting of F' onto F (whenever the nesting of F' onto F is defined). • No flowgraph is an S-structured graph unless it can be shown to be generated by a finite number of applications of the above steps. Massimo Felici - Software Metrics, 1999 41
  42. 42. Prime Decomposition. Prime decomposition theorem: Every flowgraph has a unique decomposition into a hierarchy of primes. (Fenton and Whitty, 1986) Massimo Felici - Software Metrics, 1999 42
  43. 43. Example of Hierarchical Measures. Number of nodes measure n: M1: n(F) = number of nodes in F, for each prime F. M2: n(F1; …; Fk) = sum over i=1..k of n(Fi) - (k - 1). M3: n(F(F1, …, Fk)) = n(F) + sum over i=1..k of n(Fi) - 2k, for each prime F. Massimo Felici - Software Metrics, 1999 43
  44. 44. Hierarchical Measures. Number of edges measure e: M1: e(F) = number of edges in F, for each prime F. M2: e(F1; …; Fk) = sum over i=1..k of e(Fi). M3: e(F(F1, …, Fk)) = e(F) + sum over i=1..k of e(Fi) - k, for each prime F. Massimo Felici - Software Metrics, 1999 44
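As a worked illustration of the sequencing rules (M2) reconstructed above, the node and edge counts of a sequence of primes can be computed as follows; the per-prime counts are made-up example numbers.

```python
# Worked example of the reconstructed sequencing rules M2:
# sequencing merges the stop node of each prime with the start node of
# the next, so k-1 nodes are shared; edge counts simply add up.
def n_sequence(node_counts):
    k = len(node_counts)
    return sum(node_counts) - (k - 1)

def e_sequence(edge_counts):
    return sum(edge_counts)

# Three primes with (illustrative) 5 nodes and 6 edges each
print(n_sequence([5, 5, 5]))  # 13
print(e_sequence([6, 6, 6]))  # 18
```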
  45. 45. Another Measure of Complexity. McCabe's cyclomatic complexity measure: v(F) = e - n + 2, where e is the number of edges and n the number of nodes of the flowgraph F. Massimo Felici - Software Metrics, 1999 45
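A small sketch (the flowgraph is an invented example, not from the slides) computing v(F) = e - n + 2 from an edge list:

```python
# Cyclomatic complexity v(F) = e - n + 2 for a flowgraph given as an
# edge list; the example graph is an if-then-else leading to a stop node.
def cyclomatic_complexity(edges):
    nodes = {node for edge in edges for node in edge}
    return len(edges) - len(nodes) + 2

edges = [("start", "A"), ("A", "X"), ("A", "Y"), ("X", "stop"), ("Y", "stop")]
print(cyclomatic_complexity(edges))  # 5 edges, 5 nodes -> v = 2
```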
  46. 46. Test Strategies • Black-box (or closed-box) testing, in which the test cases are derived from the specification or requirements without reference to the code itself or its structure. • White-box (or open-box) testing, in which the test cases are selected based on knowledge of the internal program structure.Massimo Felici - Software Metrics, 1999 46
  47. 47. Test Coverage Strategies. Statement coverage, in which every program statement is executed at least once. Branch (or edge) coverage, in which every edge lies on at least one path of the test cases. Path coverage, in which every possible program path is executed at least once. Massimo Felici - Software Metrics, 1999 47
  48. 48. Test Measures. Let T be a testing strategy that requires us to cover a class of objects (e.g., paths, simple paths, linearly independent paths, edges, statements, etc.). Two of the most important test measures are: Minimum number of test cases = minimal number of paths required to exercise all the objects of T at least once. Test Effectiveness Ratio: TER = number of T objects exercised at least once / total number of T objects. Massimo Felici - Software Metrics, 1999 48
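A minimal sketch of the TER computation for one coverage strategy; the required and exercised object sets (branches here) are hypothetical.

```python
# Test Effectiveness Ratio for a strategy T: objects of T exercised at
# least once divided by the total number of T objects (branches here).
def test_effectiveness_ratio(exercised, required):
    return len(set(exercised) & set(required)) / len(set(required))

required_branches = {"b1", "b2", "b3", "b4", "b5"}
exercised_branches = {"b1", "b2", "b4"}
print(test_effectiveness_ratio(exercised_branches, required_branches))  # 0.6
```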
  49. 49. Modularity and Information Flow Attributes “A module is a contiguous sequence of program statements, bounded by boundary elements, having an aggregate identifier.” (Yourdon and Constantine, 1979) Massimo Felici - Software Metrics, 1999 49
  50. 50. Module Call-Graph. [Figure: call graph in which Main calls Read_Scores, Calc_Av and Print_Av, passing the data items scores and average between them.] Massimo Felici - Software Metrics, 1999 50
  51. 51. Coupling. "Coupling is the degree of interdependence between modules." (Yourdon and Constantine, 1979) Massimo Felici - Software Metrics, 1999 51
  52. 52. An Ordinal Classification for CouplingNo coupling relation R0: x and y have no communication.Data coupling relation R1: x and y communicate byparameters.Stamp coupling relation R2: x and y accept the samerecord type as a parameter.Control coupling relation R3: x passes a parameter to ywith the intention of controlling its behaviour (e.g., flag).Common coupling relation R4: x and y refer to the sameglobal data.Content coupling relation R5: x refers to the inside of y. Massimo Felici - Software Metrics, 1999 52
  53. 53. A Measure of Coupling: c(x, y) = i + n / (n + 1), where i is the index of the worst coupling relation Ri between x and y, and n is the number of interconnections between x and y. (Fenton and Melton, 1990) Massimo Felici - Software Metrics, 1999 53
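The Fenton and Melton measure above can be sketched directly; the relation index i and the interconnection count n used for the example module pair are made-up values.

```python
# Fenton & Melton coupling: c(x, y) = i + n / (n + 1), where i is the
# index of the worst coupling relation Ri holding between x and y
# (R0..R5 from the previous slide) and n is the number of interconnections.
def coupling(worst_relation_index, interconnections):
    return worst_relation_index + interconnections / (interconnections + 1)

# Hypothetical pair: worst relation is control coupling (R3), 4 interconnections
print(coupling(3, 4))  # 3.8
```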
  54. 54. Cohesion. "The cohesion of a module is the extent to which its individual components are needed to perform the same task." Massimo Felici - Software Metrics, 1999 54
  55. 55. An Ordinal Scale for Cohesion. Functional: the module performs a single well-defined function. Sequential: the module performs more than one function, but they occur in an order prescribed by the specification. Communicational: the module performs multiple functions, but all on the same body of data. Procedural: the module performs more than one function, and they are related only to a general procedure effected by the software. Temporal: the module performs more than one function, and they are related only by the fact that they must occur within the same timespan. Logical: the module performs more than one function, and they are related only logically. Coincidental: the module performs more than one function, and they are unrelated. Massimo Felici - Software Metrics, 1999 55
  56. 56. A Simple Measure of Cohesion. Cohesion ratio = number of modules having functional cohesion / total number of modules. Massimo Felici - Software Metrics, 1999 56
  57. 57. Information Flow. Local direct flow exists if either a module invokes a second module and passes information to it, or the invoked module returns a result to the caller. Local indirect flow exists if the invoked module returns information that is subsequently passed to a second invoked module. Global flow exists if information flows from one module to another via a global data structure. Massimo Felici - Software Metrics, 1999 57
  58. 58. Measures of Information Flow. fan-in(M) is the number of local flows that terminate at M, plus the number of data structures from which information is retrieved by M. fan-out(M) is the number of local flows that emanate from M, plus the number of data structures that are updated by M. Information flow complexity(M) = length(M) * (fan-in(M) * fan-out(M))^2 (Henry and Kafura). Shepperd complexity(M) = (fan-in(M) * fan-out(M))^2 (Shepperd). Massimo Felici - Software Metrics, 1999 58
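A sketch of the two information-flow measures above for a single module; the length, fan-in and fan-out values are hypothetical counts.

```python
# Henry & Kafura: IFC(M) = length(M) * (fan_in(M) * fan_out(M))^2
# Shepperd:       SC(M)  = (fan_in(M) * fan_out(M))^2
def henry_kafura(length, fan_in, fan_out):
    return length * (fan_in * fan_out) ** 2

def shepperd(fan_in, fan_out):
    return (fan_in * fan_out) ** 2

# Hypothetical module: 120 LOC long, fan-in 3, fan-out 2
print(henry_kafura(120, 3, 2))  # 4320
print(shepperd(3, 2))           # 36
```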
  59. 59. Object-Oriented MetricsMetric 1: weighted methods per class (WMC)Metric 2: depth of inheritance tree (DIT)Metric 3: number of children (NOC)Metric 4: coupling between object classes (CBO)Metric 5: response for class (RFC)Metric 6: lack of cohesion metric (LCOM) Massimo Felici - Software Metrics, 1999 59
  60. 60. Data Structure Metrics. There have been few attempts to define measures of actual data items and their structure. Two examples: Number of distinct operands = number of variables + number of unique constants + number of labels. D/P = database size in bytes or characters / program size in delivered source instructions (DSI) (Boehm). Massimo Felici - Software Metrics, 1999 60
  61. 61. Measuring External Product Attributes• What is quality for the final product?• What is quality for the end user?• How can external quality be measured?• What is the relation between external and internal attributes?• Is quality a general concept? Massimo Felici - Software Metrics, 1999 61
  62. 62. Modeling Software Quality: ISO 9126. Functionality (suitability, accuracy, interoperability, security): a set of attributes that bear on the existence of a set of functions and their specified properties; the functions are those that satisfy stated or implied needs. Reliability (maturity, fault tolerance, recoverability): a set of attributes that bear on the capability of software to maintain its level of performance under stated conditions for a stated period of time. Usability (understandability, learnability, operability): a set of attributes that bear on the effort needed for use, and on the individual assessment of such use, by a stated or implied set of users. Efficiency (time behaviour, resource behaviour): a set of attributes that bear on the relationship between the level of performance of the software and the amount of resources used, under stated conditions. Maintainability (analysability, changeability, stability, testability): a set of attributes that bear on the effort needed to make specified modifications. Portability (adaptability, installability, conformance, replaceability): a set of attributes that bear on the ability of software to be transferred from one environment to another. Massimo Felici - Software Metrics, 1999 62
  63. 63. Measuring Aspects of Quality: defects-based quality measures, usability measures, maintainability measures. Massimo Felici - Software Metrics, 1999 63
  64. 64. Defect Density Measure: defect density = number of known defects / product size. Massimo Felici - Software Metrics, 1999 64
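Defect density is commonly normalised per KLOC so that modules of different sizes can be compared; a minimal, illustrative sketch (module names and counts are invented):

```python
# Defect density = known defects / product size, reported here per KLOC.
modules = {          # module: (known defects, size in LOC) - hypothetical
    "parser":    (12, 4_000),
    "scheduler": (3,  1_500),
    "ui":        (20, 10_000),
}
for name, (defects, loc) in modules.items():
    print(name, round(defects / (loc / 1000), 2), "defects/KLOC")
```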
  65. 65. Defect Density Warnings • definition of defect • distinction between defect density and defect rate • definition of software size • interpretation of defect density measures (defect finding) • software is never fault free • seriousness of faults • variability of users Massimo Felici - Software Metrics, 1999 65
  66. 66. Usability Measures• difficult to define usability• internal measures of usability• the external view of usability is environment dependent Massimo Felici - Software Metrics, 1999 66
  67. 67. Maintainability Measures• corrective maintainability• adaptive maintainability• preventive maintainability• perfective maintainability Massimo Felici - Software Metrics, 1999 67
  68. 68. External View of Maintainability Mean Time To Repair (MTTR) • problem recognition time • administrative delay time • maintenance tools collection time • problem analysis time • change specification time • changing time (including testing and review) Massimo Felici - Software Metrics, 1999 68
  69. 69. Other Environmental-Dependent Maintainability Measures• ratio of total change implementation time to total number of changes implemented• number of unresolved problems• time spent on unresolved problems• percentage of changes that introduce new faults• number of modules modified to implement a change Massimo Felici - Software Metrics, 1999 69
  70. 70. Software ReliabilityMeasurement and Prediction • What is software reliability? • Software reliability vs. hardware reliability Massimo Felici - Software Metrics, 1999 70
  71. 71. The Basic Problem of Reliability Theory • The basic problem of reliability theory is to predict when a system will eventually fail. • Hardware reliability is normally concerned with component failures due to physical wear (e.g., corrosion, shock, over-heating, etc.). • Such failures are probabilistic in nature; that is, we usually do not know exactly when something will fail, but we know that the product eventually will fail, so we can assign a probability that the product will fail at a particular point in time. Massimo Felici - Software Metrics, 1999 71
  72. 72. Basic Reliability Theory. Probability density function: f(t). Distribution function: F(t) = ∫0^t f(x) dx. Reliability function: R(t) = 1 - F(t). Massimo Felici - Software Metrics, 1999 72
  73. 73. Basic Reliability Theory. Mean Time To Failure (MTTF): E[T] = ∫0^∞ t f(t) dt. Mean Time To Recover (MTTR). Mean Time Between Failures (MTBF): MTBF = MTTF + MTTR. Massimo Felici - Software Metrics, 1999 73
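As a worked illustration of these definitions (not in the slides), assume a constant failure rate λ, i.e., an exponential time-to-failure distribution; then F(t) = 1 - e^(-λt), R(t) = e^(-λt) and MTTF = 1/λ. The failure rate and MTTR below are invented values.

```python
import math

# Exponential model as an illustration of the reliability definitions:
# f(t) = lam * exp(-lam * t), F(t) = 1 - exp(-lam * t), R(t) = exp(-lam * t).
lam = 1e-3                      # hypothetical failure rate: 1 failure per 1000 hours

def reliability(t):
    return math.exp(-lam * t)

mttf = 1 / lam                  # mean time to failure for the exponential model
mttr = 5                        # hypothetical mean time to recover (hours)
mtbf = mttf + mttr              # MTBF = MTTF + MTTR

print(reliability(100))         # probability of surviving 100 hours ~ 0.905
print(mttf, mtbf)               # 1000.0 1005
```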
  74. 74. The Software Reliability Problem Software Reliability vs. Hardware Reliability• There are many reasons for software to fail, but none involves wear and tear.• Usually, software fails because of a design problem.• Other failures occur when the code is written, or when changes (e.g., changed requirements, revised design, corrections of existing problems, etc.) are introduced to a working system.• Ideally, we want our changes implemented without introducing new faults, so that by fixing a known problem we increase the overall reliability of the system. Massimo Felici - Software Metrics, 1999 74
  75. 75. The Software Reliability Problem Software Reliability vs. Hardware Reliability• When hardware fails, the problem is fixed by replacing the failed component with a new or repaired one, so that the system is restored to its previous reliability. Rather than growing, the reliability is simply maintained.• The key distinction between software reliability and hardware reliability is the difference between intellectual failure (usually due to design faults) and physical failure.• Although hardware reliability can also suffer from design faults, the extensive theory of hardware reliability does not deal with them. Massimo Felici - Software Metrics, 1999 75
  76. 76. Parametric Reliability Growth Models • Geometric • Jelinski-Moranda • Littlewood-Verrall • Musa basic Massimo Felici - Software Metrics, 1999 76
  77. 77. Failure Data for Reliability Modeling (Musa) Massimo Felici - Software Metrics, 1999 77
  78. 78. Comparison of Several Reliability Models Massimo Felici - Software Metrics, 1999 78
  79. 79. Choosing the Best Model • Uncertainty about the operational environment • Uncertainty about the effect of fault removal • The behaviour of models depends greatly on the data to which they are applied. In fact, the accuracy of models also varies from one data set to another. Sometimes no parametric prediction system can produce reasonably accurate results. Massimo Felici - Software Metrics, 1999 79
  80. 80. Resource Measurement Productivity Teams Tools Massimo Felici - Software Metrics, 1999 80
  81. 81. Standards and Approaches • ISO 9126 • IEEE 982 • Capability Maturity Model (CMM), Software Engineering Institute (SEI) • SPICE, European Software Institute (ESI) • Goal Question Metric (GQM), Victor Basili • SQUID Massimo Felici - Software Metrics, 1999 81
  82. 82. Goal Question Metric (GQM). The Goal Question Metric approach provides a framework involving three main steps: 1. List the major goals of the development or maintenance project. 2. Derive from each goal the questions that must be answered to determine if the goals are being met. 3. Decide what must be measured in order to be able to answer the questions adequately. Massimo Felici - Software Metrics, 1999 82
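The three GQM steps naturally form a small tree from goals to questions to metrics. A minimal, hypothetical sketch follows; the goal, questions and metrics are invented examples, not content from the slides.

```python
# Hypothetical GQM tree: one goal, its questions, and the metrics that answer them.
gqm = {
    "Improve the reliability of the delivered product": {
        "How many defects escape to the field?": ["post-release defect density"],
        "Are we finding faults before release?": ["defect detection efficiency",
                                                  "test effectiveness ratio (TER)"],
    }
}

for goal, questions in gqm.items():
    print("GOAL:", goal)
    for question, metrics in questions.items():
        print("  Q:", question, "-> metrics:", ", ".join(metrics))
```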
  83. 83. Capability Maturity Model (CMM). Level 1: Initial (no attempt to standardise process). Level 2: Repeatable (disciplined process). Level 3: Defined (standard, consistent process). Level 4: Managed (predictable process). Level 5: Optimising (continuously improving process). Massimo Felici - Software Metrics, 1999 83
  84. 84. References. Fenton, N.E., Pfleeger, S.L.: Software Metrics: A Rigorous & Practical Approach, Second Edition. International Thomson Computer Press, 1996. ISO/IEC 9126, Information technology - Software quality characteristics and metrics. IEEE Std 982.1-1988, IEEE Standard Dictionary of Measures to Produce Reliable Software. IEEE, 1988. IEEE Std 982.2-1988, IEEE Guide for the Use of IEEE Standard Dictionary of Measures to Produce Reliable Software. IEEE, 1988. Lyu, M. (Ed.): Handbook of Software Reliability Engineering. IEEE Computer Society Press, 1996. Musa, J., Iannino, A., Okumoto, K.: Software Reliability: Measurement, Prediction and Application. McGraw-Hill, New York, 1987. Salamon, W.J., Wallace, D.R.: NISTIR 5459, Quality Characteristics and Metrics for Reusable Software. Massimo Felici - Software Metrics, 1999 84
  85. 85. Part II: Introduction to SQUID. Massimo Felici - Software Metrics, 1999 85
  86. 86. Massimo Felici - Software Metrics, 1999 86
  87. 87. Software QUality In the Development process (SQUID). The SQUID Project P8436 has been funded by the European Commission within the ESPRIT Programme. SQUID Partners: Keele University (UK), DELTA (DK), ENEA (IT), Engineering Ingegneria Informatica S.p.A. (IT), TÜV-Nord (DE). Massimo Felici - Software Metrics, 1999 87
  88. 88. SQUID Approach • Provides support for product quality specification, control and evaluation • Measurement-based • Two types of measure – Internal measures relating to software and its development process – External measures relating to operational behaviour and the support process • Problem – Software engineering research cannot show a functional relationship between internal measures and external measures • SQUID View – Controlling internal quality is likely to improve product quality – Collecting and analysing data will confirm whether any relationships exist in your own organisation Massimo Felici - Software Metrics, 1999 88
  89. 89. Four Steps to Quality• Specify product quality requirements objectively: – Target values of measurable properties• Assess feasibility of quality requirements before starting – Address feasibility problems as necessary• Monitor progress towards achieving quality requirements – Set Targets for measures arising during product development – Monitor and control these measures• Evaluate final product quality by comparing target values with actuals Massimo Felici - Software Metrics, 1999 89
  90. 90. SQUID Approach • SQUID approach requires – Integration of different types of measure (internal & external) – Integration of different models (quality models & development models) • Problem – There are many different quality models • Different quality models lead to different external measures – Different organisations will use different development models • Different development methods lead to different internal measures • SQUID toolset supports configuration – Definition of organisation-specific models and measures – Analysis methods independent of specific models and measures Massimo Felici - Software Metrics, 1999 90
  91. 91. SQUID Approach • SQUID supports the various processes needed to manage quality – Quality Specification – Quality Planning – Quality Monitoring – Quality Evaluation • Basis of SQUID toolset functions Massimo Felici - Software Metrics, 1999 91
  92. 92. SQUID Quality Process. [Diagram: user needs map to quality in use (user layer); quality specification and quality evaluation form the external layer; quality planning and quality control form the internal layer.] Massimo Felici - Software Metrics, 1999 92
  93. 93. Quality SpecificationUser or organisational product requirements: – Mapped to appropriate elements of a quality model – Converted to required values of measurable operational properties – Checked for feasibility Massimo Felici - Software Metrics, 1999 93
  94. 94. Modeling the Development Process. [Diagram: a development model comprises project object types; each project object belongs to a project object type; project object types include deliverable types, development activity types and review point types.] Massimo Felici - Software Metrics, 1999 94
  95. 95. Creating a Quality Model. [Diagram: a quality requirement specification leads to product behavioural requirements; each requirement addresses a quality characteristic belonging to the quality model; quality characteristics refine into internal and external quality characteristics; internal quality characteristics influence and indicate external ones; internal characteristics are quantified by internal attributes, external characteristics by external attributes.] Massimo Felici - Software Metrics, 1999 95
  96. 96. The SQUID View of Measurement. [Diagram: a project object exhibits attributes; measurement assigns a value that quantifies an attribute, expressed using a unit and collected in accordance with a counting rule.] Massimo Felici - Software Metrics, 1999 96
  97. 97. Quality Planning • After quality specification agreed • Development process must be defined – Assumed to be one of several company standard methods • Set targets for internal measures – SQUID assumes that controlling internal measures will deliver required quality – To monitor progress, internal measures should be associated with different development activities and intermediate deliverables – Target values can be based on past performance: • What internal values were observed on projects that achieved similar requirements Massimo Felici - Software Metrics, 1999 97
  98. 98. Quality Monitoring• Compares actual values with target values during software production – To assess current status• Compares estimates of future quality measures with target values – To re-assess feasibility• Identifies “anomalies” – Unusual components (modules, documents, programs) – Possible problem components Massimo Felici - Software Metrics, 1999 98
  99. 99. The SQUID Toolset Architecture: Configuration, Quality Evaluation, Quality Specification, SQUID Database, Quality Planning, Quality Monitoring, Data Collation. Massimo Felici - Software Metrics, 1999 99
  100. 100. Part III: Case Studies. Massimo Felici - Software Metrics, 1999 100
  101. 101. Case Studies• A railways traffic control system for the Italian National Railways• Antarctica TELESCIENCE Project Massimo Felici - Software Metrics, 1999 101
  102. 102. A Railways Traffic Control System - Project Summary. Customer: Italian National Railways (FS). Project Type: Research, Prototyping. Target: Developing a new Control System. System Aims: Controlling Train Traffic in real-time; No Safety. Design Approach: User Centred Design. Railways Traffic Control System. Massimo Felici - Software Metrics, 1999 102
  103. 103. User Centred Design Approach - General Properties • Based on user activity • Iterative design process • User participation • Prototypes based on scenarios (frequent modification) • Instruments used for designing, analysing and evaluating: video/record taping, questionnaires, interviews. Railways Traffic Control System. Massimo Felici - Software Metrics, 1999 103
  104. 104. Phases of the Design Process. [Diagram: task analysis, task allocation, system design, job design and evaluation, with prototyping spanning all phases.] Railways Traffic Control System. Massimo Felici - Software Metrics, 1999 104
  105. 105. Schema of the Control System. [Diagram: real system, system monitor, man-machine interface, operator, decision support system.] Railways Traffic Control System. Massimo Felici - Software Metrics, 1999 105
  106. 106. Quality Model. [Diagram: a characteristic layer (reliability, usability, functionality, efficiency, maintainability), subcharacteristic layers, and an attribute layer (internal/external).] Railways Traffic Control System. Massimo Felici - Software Metrics, 1999 106
  107. 107. Quality Requirements. Each system component: performs different tasks and has different quality requirements. Example (System Monitor component) - quality characteristic / subcharacteristic / requirement level: Reliability - Very High; Functionality: function completeness - High, function correctness - High; Efficiency - Medium; Maintainability: corrective maintainability - High, adaptive maintainability - Very High. Railways Traffic Control System. Massimo Felici - Software Metrics, 1999 107
  108. 108. Quantitative Targets. Quantitative targets to monitor progress of components. Example (System Monitor component) - quality characteristic / attribute / target: Reliability - estimated mean time between failures, target 1*10E3 hours; Correctness - module fault density, target 10 faults per module; Test accuracy - test coverage, targets function coverage = 100% and branch coverage = 70%; . . . Railways Traffic Control System. Massimo Felici - Software Metrics, 1999 108
  109. 109. Selection Criteria for Metrics • Objective • Observable • Repeatable • Predictive • Easy to collect • Valid across further projects • User Centred Design aspects (new methods and tools that should be exploited by metrics). Railways Traffic Control System. Massimo Felici - Software Metrics, 1999 109
  110. 110. Critical Issues • Frequent modifications • Iterative design process • User Centred Design aspects. Railways Traffic Control System. Massimo Felici - Software Metrics, 1999 110
  111. 111. Data Collection Activity • Use of the SQUID tool • Tools to automate the data collection activity • Reports. Railways Traffic Control System. Massimo Felici - Software Metrics, 1999 111
  112. 112. Conclusion • Software Quality in User Centred Design • Interpretation of metrics within a specific context • Critical issues: frequent modifications, iterative design process, User Centred Design aspects • Definition of new (user centred design) metrics. Railways Traffic Control System. Massimo Felici - Software Metrics, 1999 112
  113. 113. Antarctica Telescience Project • The Telescience Project was launched at the end of 1993 in the framework of the Italian National Antarctic Research Programme. • The aim is the development of an Advanced Supervision and Telecontrol System to remotely perform experimental activities in Antarctica during the austral winter. Antarctica Telescience Project. Massimo Felici - Software Metrics, 1999 113
  114. 114. Project Description. Development responsibility: ENEA, Division INN RIN INFAV. Estimated number of modules: 40. Estimated code size: 19,000 LOC. Expected man power required: 7 staff-years. Personnel assigned to the project: 8 staff. Experience of the staff with CASE tools: Good. Experience of the staff with quality measurement tools: None. Use of quality control tools (other than SQUID): None. Programming language: C. Antarctica Telescience Project. Massimo Felici - Software Metrics, 1999 114
  115. 115. Antarctica Telescience Project - Main Systems • A set of Remote Laboratories in Antarctica • A Supervision and Telecontrol Room in Antarctica • An Electric Power Supply & Distribution System in Antarctica • A Telecommunication System in Antarctica • A Remote Man-Machine Interface System in Italy. Antarctica Telescience Project. Massimo Felici - Software Metrics, 1999 115
  116. 116. Architecture of the Advanced Supervision and Telecontrol System. [Diagram: remote laboratories no. 1 to n, the Supervision and Telecontrol Room, the Electric Power Supply & Distribution System and the Telecommunication System are located in Antarctica; the Remote Man-Machine Interface System is located in Italy.] Antarctica Telescience Project. Massimo Felici - Software Metrics, 1999 116
  117. 117. The Electric Power Supply and Distribution System • The Electric Power Supply & Distribution System (EPSDS) ensures that electricity produced by the on-site diesel generators is available to the equipment. • The experimental activities are intended to run unmanned for 8 months in Antarctica, and the EPSDS must guarantee the power supply for that period without interruptions. • The EPSDS has stringent requirements. • The real-time control software is duplicated. • If the software fails catastrophically in both versions of the Electric Power Supply Subsystem, a back-up battery can supply power for one hour. Antarctica Telescience Project. Massimo Felici - Software Metrics, 1999 117
  118. 118. The Remote Man-Machine Interface System • The MMI allows users to perform scientific activities as well as to control and monitor the Antarctica base. • There are two types of users: operators and scientists. • The Remote MMI system is based on 7 main subsystems: 1. Data acquisition 2. Information presentation 3. Command sending 4. Simulation 5. Operator training 6. Mission planning support 7. Data elaboration. Antarctica Telescience Project. Massimo Felici - Software Metrics, 1999 118
  119. 119. Product Portion - subproject / product portion / components. Electric Power Supply & Distribution system: PP1 - Electric Power Supply subsystem; PP2 - Electric Power Distribution subsystem. Remote MMI system 1: PP3 - Information presentation subsystem, Command sending subsystem, Simulation subsystem. Remote MMI system 2: PP4 - Mission planning support subsystem, Operator training subsystem, Data elaboration subsystem, Data acquisition subsystem. Antarctica Telescience Project. Massimo Felici - Software Metrics, 1999 119
