This chapter discusses software estimation, measurement, and metrics. It explains that accurate size estimation is critical for determining cost, schedule, and effort, but estimates often come in too low, leading to budget overruns and delays. It describes various size estimation techniques, including source lines of code, function points, and feature points. It also discusses complexity metrics and the importance of requirements management. The chapter emphasizes that software measurement provides visibility into program status and facilitates early problem detection. An effective measurement program should be integrated throughout the lifecycle, with the collected data used to manage the program. If problems are found, measurement enables corrective actions to be taken.
This document provides an overview and introduction to software metrics. It discusses measurement concepts and why measurement is important for software engineering. It covers topics like the basics of measurement, collecting metrics data, analyzing data, and measuring internal and external attributes of software. Specific metrics discussed include size, structure, complexity, reliability, and test coverage. The document is intended to introduce readers to fundamental software metrics concepts.
Software quality metrics provide important insights into software testing efforts and processes. They can help evaluate products and processes against goals, control resources, and predict future attributes. There are three categories of metrics: process, product, and project. Process metrics measure testing efficiency and effectiveness. Product metrics depict product characteristics like size and quality. Project metrics measure schedule, cost, productivity, and code quality. Choosing metrics based on organizational goals and providing feedback are best practices for an effective metrics program.
Software Metrics: Introduction
Attributes of Software Metrics
Activities of a Measurement Process
Types of Metrics
Normalization of Metrics
Software metrics help software engineers gain insight into the design and construction of the software.
To compare projects, we need to know their size and complexity; raw measures from two different projects cannot be compared directly.
But if we normalize the measures, it becomes possible to compare the two.
There are two ways to normalize:
Size-Oriented Metrics
Function-Oriented Metrics
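The two normalization approaches can be illustrated with a short calculation. All project figures below are invented for illustration; the formulas (errors per KLOC, errors per function point) are the standard size-oriented and function-oriented derivations.

```python
# Hypothetical project data (illustrative numbers only)
loc = 12_100           # lines of code delivered
function_points = 310  # measured function points
errors = 134           # errors found before delivery
effort_pm = 24         # effort in person-months

# Size-oriented metrics: normalize by KLOC (thousands of lines of code)
kloc = loc / 1000
errors_per_kloc = errors / kloc
loc_per_pm = loc / effort_pm

# Function-oriented metrics: normalize by function points
errors_per_fp = errors / function_points
fp_per_pm = function_points / effort_pm

print(f"Errors/KLOC: {errors_per_kloc:.1f}")  # ~11.1
print(f"Errors/FP:   {errors_per_fp:.2f}")    # ~0.43
print(f"FP/person-month: {fp_per_pm:.2f}")
```

Two projects of very different sizes can now be compared on the normalized values rather than on raw error counts.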
This document discusses software metrics and how they can be used to measure various attributes of software products and processes. It begins by asking questions that software metrics can help answer, such as how to measure software size, development costs, bugs, and reliability. It then provides definitions of key terms like measurement, metrics, and defines software metrics as the application of measurement techniques to software development and products. The document outlines areas where software metrics are commonly used, like cost estimation and quality/reliability prediction. It also discusses challenges in implementing metrics and provides categories of metrics like product, process, and project metrics. The remainder of the document provides examples and formulas for specific software metrics.
The document discusses various metrics that can be collected during software testing. It describes metrics for the requirements phase like requirement stability index and requirements leakage index. For the test design phase, it discusses metrics like test case preparation productivity. Metrics for the test execution phase include test case pass percentage and test case execution percentage. Defect metrics covered are defect summary, defect discovery rate, defect severity, defect density, and defect rejection ratio. Formulas for calculating these metrics are provided along with descriptions of what each metric measures.
This document discusses various topics related to software project management and metrics. It describes the roles and skills needed for a software project manager, including motivation, organization, and innovation. It also discusses characteristics of effective project managers such as problem solving, leadership, achievement, and team building. The document outlines several software metrics that can be collected, such as size-oriented metrics, function-oriented metrics, quality metrics, and defect metrics. It provides details on calculating and using function points and discusses measuring aspects of quality like correctness, maintainability, and integrity.
This document discusses different types of software metrics including process, product, and project metrics. It defines metrics as quantitative measures of attributes and discusses how they can be used as indicators to improve processes and projects. Process metrics measure attributes of the development process over long periods of time. Product metrics measure attributes of the software at different stages. Project metrics are used to monitor and control projects. The document also discusses size-oriented and function-oriented metrics for normalization and comparison purposes. It provides examples of calculating function points and deriving metrics like errors per function point.
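The function point calculation mentioned above follows the standard form FP = count-total × (0.65 + 0.01 × ΣFi). The domain counts and the value-adjustment sum below are invented for illustration; the weights shown are the conventional average-complexity weights.

```python
# (count, weight) pairs for the five information domain values,
# using average-complexity weights: inputs=4, outputs=5, inquiries=4,
# internal files=10, external interfaces=7
domain = {
    "external_inputs":     (32, 4),
    "external_outputs":    (60, 5),
    "external_inquiries":  (24, 4),
    "internal_files":      (8, 10),
    "external_interfaces": (2, 7),
}
count_total = sum(n * w for n, w in domain.values())

# Sum of the 14 value-adjustment factors, each rated 0-5 (46 is illustrative)
sum_fi = 46
fp = count_total * (0.65 + 0.01 * sum_fi)

errors_found = 52  # illustrative
print(f"FP = {fp:.0f}, errors/FP = {errors_found / fp:.3f}")
```

Derived metrics such as errors per function point then come directly from dividing the raw measure by the FP value.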
Software metrics provide quantitative measures to gain insight into software processes and projects. They are used to characterize, evaluate, predict, and improve software. There are challenges to defining effective metrics. Measurement principles include formulating appropriate metrics, collecting data, analyzing results, and providing feedback. Common software quality attributes defined by McCall and ISO standards include functionality, reliability, usability, efficiency, maintainability, and portability. Goal-oriented measurement identifies metrics to answer questions and achieve goals related to software processes and products.
This document discusses various software metrics that can be used at the organization, project, and individual level. At the organization level, key metrics include effort variation, size variation, and schedule variation which impact profitability. Project-level metrics like requirements stability and defect leakage are also important. Individual developers may track defects found during unit testing and code quality metrics. Software metrics provide valuable insight for process improvement, resource allocation, and performance measurement.
This document discusses various software testing metrics including defect density, requirement volatility, test execution productivity, and test efficiency. Defect density measures the number of defects found divided by the size of the software. Requirement volatility measures the percentage of original requirements that were changed. Test execution productivity measures the number of test cases executed per day. Test efficiency measures the percentage of defects found during testing versus post-release. These metrics provide ways to measure software quality and testing effectiveness.
This document discusses software quality and metrics. It defines software quality as conformance to requirements, standards, and implicit expectations. It outlines ISO 9126 quality factors like functionality, reliability, usability, and maintainability. It describes five views of quality: transcendental, user, manufacturing, product, and value-based. It also discusses types of metrics like product, process, and project metrics. Product metrics measure characteristics like size, complexity, and quality level. The document provides guidelines for developing, collecting, analyzing, and interpreting software metrics.
One of the most challenging problems that test managers face involves implementing effective, meaningful, and insightful test metrics. Data and measures are the foundation of true understanding, but the misuse of metrics causes confusion, bad decisions, and demotivation. Rex Black shares how to avoid these unfortunate situations by using metrics properly as part of your test management process. How can we measure our progress in testing a project? What can metrics tell us about the quality of the product? How can we measure the quality of the test process itself? Rex answers these questions, illustrated with case studies and real-life examples. Learn how to use test case metrics, coverage metrics, and defect metrics in ways that demonstrate status, quantify effectiveness, and support smart decision making. Exercises provide immediate opportunities for you to apply the techniques to your own testing metrics. Join Rex to jump-start a new testing metrics program or gain new ideas to improve your existing one.
When designing a bridge, an architect has all sorts of tools and techniques to verify it won't collapse.
We software developers... we build. With a bit of luck, we even test. And then we pray it doesn't collapse :)
Can we measure software? Can we compare code objectively? Can we predict problems?
This document discusses software metrics and measurement. It describes how measurement can be used throughout the software development process to assist with estimation, quality control, productivity assessment, and project control. It defines key terms like measures, metrics, and indicators and explains how they provide insight into the software process and product. The document also discusses using metrics to evaluate and improve the software process as well as track project status, risks, and quality. Finally, it covers different types of metrics like size-oriented, function-oriented, and quality metrics.
This document discusses software metrics. It defines a software metric as a standard measure of a software system or process's properties. The document then classifies metrics into product, process, and resource metrics. It describes several types of product metrics including size, complexity, Halstead's, and quality metrics. The document recommends using a GQM (Goal, Question, Metric) approach to implement an effective metrics program. It provides references for further reading.
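The GQM (Goal, Question, Metric) approach recommended here is a top-down hierarchy: a goal is refined into questions, and each question is answered by metrics. A minimal sketch of one such tree, with goal, questions, and metrics invented for illustration:

```python
# A minimal GQM tree as nested dicts (contents are illustrative, not from the source)
gqm = {
    "goal": "Improve the reliability of released software",
    "questions": [
        {
            "question": "How many defects escape to the field?",
            "metrics": ["post-release defects per KLOC", "defect removal efficiency"],
        },
        {
            "question": "In which phase are defects introduced?",
            "metrics": ["defects per phase", "phase-based defect removal rate"],
        },
    ],
}

# Walk the tree: each question links the goal to concrete, collectible metrics
for q in gqm["questions"]:
    print(q["question"], "->", ", ".join(q["metrics"]))
```

The point of the structure is that no metric is collected unless it answers a question, and no question is asked unless it serves the goal.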
This document discusses metrics that can be used to measure software processes and projects. It begins by defining software metrics and explaining that they provide quantitative measures that offer insight for improving processes and projects. It then distinguishes between metrics for the software process domain and project domain. Process metrics are collected across multiple projects for strategic decisions, while project metrics enable tactical project management. The document outlines various metric types, including size-based metrics using lines of code or function points, quality metrics, and metrics for defect removal efficiency. It emphasizes integrating metrics into the software process through establishing a baseline, collecting data, and providing feedback to facilitate continuous process improvement.
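Defect removal efficiency, mentioned above, is conventionally defined as DRE = E / (E + D), where E is the number of errors found before delivery and D the number of defects found after delivery. A one-function sketch (sample counts invented):

```python
def defect_removal_efficiency(errors_before_release: int,
                              defects_after_release: int) -> float:
    """DRE = E / (E + D): fraction of total defects removed before release."""
    e, d = errors_before_release, defects_after_release
    return e / (e + d)

# 190 errors caught before release, 10 defects reported afterwards
print(defect_removal_efficiency(190, 10))  # 0.95
```

A DRE approaching 1.0 indicates that quality assurance activities are filtering out nearly all defects before the software reaches users.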
This document discusses software metrics that can be used to measure process and project attributes. It defines key terms like measurement, measure, metric and indicator. It describes different types of metrics like process metrics, project metrics, size-oriented metrics, function-oriented metrics and quality metrics. It also discusses concepts like defect removal efficiency and redefining defect removal efficiency to measure effectiveness of quality assurance activities.
Software Engineering Practice - Software Metrics and Estimation (Radu_Negulescu)
This document discusses software metrics and estimation. It introduces common product and project metrics like lines of code, function points, time and effort. It describes the basis for estimation, including measures of size/scope and baseline data from previous projects. It notes the uncertainty in estimation and discusses techniques like lines of code, function points and estimation rules of thumb. The goal is to provide metrics to answer key questions for planning, monitoring and controlling a software project.
Software metrics can be used to measure various attributes of software products and processes. There are direct metrics that immediately measure attributes like lines of code and defects, and indirect metrics that measure less tangible aspects like functionality and reliability. Metrics are classified as product metrics, which measure attributes of the software product, and process metrics, which measure the software development process. Project metrics are used tactically within a project to track status, risks, and quality, while process metrics are used strategically for long-term process improvement. Common software quality attributes that can be measured include correctness, maintainability, integrity, and usability.
This document discusses software quality metrics and classifies them into two categories: process metrics and product metrics. Process metrics are related to the software development process and include software process quality metrics, timetable metrics, error removal effectiveness metrics, and productivity metrics. Product metrics are related to software maintenance and customer service. The document provides examples of specific metrics like error density metrics, error severity metrics, and high-definition quality and productivity metrics.
The document discusses different types of software metrics that can be used to measure various aspects of software development. Process metrics measure attributes of the development process, while product metrics measure attributes of the software product. Project metrics are used to monitor and control software projects. Metrics need to be normalized to allow for comparison between different projects or teams. This can be done using size-oriented metrics that relate measures to the size of the software, or function-oriented metrics that relate measures to the functionality delivered.
The document discusses software test management and planning. It notes that errors found early in the development process are less costly to fix. A graph shows that errors discovered during maintenance are 368 times more expensive to fix than requirements errors. The document recommends optimizing the software process to find errors early. It also provides guidance on test planning, including designing for testability, defining metrics, covering all requirements with tests, and integrating the test plan into the project plan.
The document discusses various concepts related to software testing including verification vs validation, inspection vs testing, black box vs white box testing, and testing techniques like equivalence partitioning, boundary value analysis, and cause-effect graphing. It defines verification as ensuring software meets specifications while validation ensures it meets customer needs. Inspection examines static documents while testing evaluates dynamic behavior. Black box testing uses requirements while white box considers internal structure.
The document discusses software maintenance. It defines software maintenance as modifying software after delivery to correct faults, improve performance, or adapt to environmental changes. Maintenance activities include error correction, enhancing capabilities, deleting obsolete functions, and optimization. Lehman's Law states that software maintenance is evolutionary development and past maintenance informs future decisions. The challenges of software maintenance include technical issues like testing and limited understanding, as well as management issues like cost estimation and priorities.
This document outlines the IEEE standard for a software quality metrics methodology. It describes the purpose of establishing software quality metrics which is to quantify quality factors and ensure quality requirements are met. It provides steps for developing a software quality metrics program which include determining quality requirements, selecting metrics, collecting and storing data, validating metrics, and revalidating metrics when the environment changes. The goal is to use metrics to predict and measure software quality in order to identify issues and ensure compliance with quality standards.
The document discusses various software quality metrics that can be used to measure product quality, in-process quality, and maintenance quality. It describes three main groups of metrics: product metrics, process metrics, and project metrics. Specific metrics discussed include defect density, mean time to failure, customer problems, function points, defect arrival patterns, phase-based defect removal, and customer satisfaction surveys. The purpose of these metrics is to understand relationships between quality factors and improve both the development process and product quality.
Software Measurement: Establishing a Software Measurement Process (Amin Bandeali)
This document provides guidelines for establishing a software measurement process. It outlines objectives like tying measurement to goals and defining measurements clearly. It recommends following the Plan-Do-Check-Act cycle and using the Entry-Task-Validation-Exit framework. The document also describes starting a software measurement program by forming a focal group, identifying objectives, designing and prototyping the process, documenting and implementing it, then expanding the program. The overall goal is to provide insights into software processes and products to make better decisions through measurement.
Software metrics involve collecting measurements related to software development processes, projects, and products. There are different types of metrics including process, project, and product metrics. Process metrics measure the software development lifecycle, project metrics measure team efficiency, and product metrics measure quality. Metrics can also be private, used by individuals, or public, used to measure teams and processes. Size-oriented metrics are computed based on the size of the software, often expressed in lines of code.
This document discusses various software metrics that can be used at the organization, project, and individual level. At the organization level, key metrics include effort variation, size variation, and schedule variation which impact profitability. Project-level metrics like requirements stability and defect leakage are also important. Individual developers may track defects found during unit testing and code quality metrics. Software metrics provide valuable insight for process improvement, resource allocation, and performance measurement.
This document discusses various software testing metrics including defect density, requirement volatility, test execution productivity, and test efficiency. Defect density measures the number of defects found divided by the size of the software. Requirement volatility measures the percentage of original requirements that were changed. Test execution productivity measures the number of test cases executed per day. Test efficiency measures the percentage of defects found during testing versus post-release. These metrics provide ways to measure software quality and testing effectiveness.
This document discusses software quality and metrics. It defines software quality as conformance to requirements, standards, and implicit expectations. It outlines ISO 9126 quality factors like functionality, reliability, usability, and maintainability. It describes five views of quality: transcendental, user, manufacturing, product, and value-based. It also discusses types of metrics like product, process, and project metrics. Product metrics measure characteristics like size, complexity, and quality level. The document provides guidelines for developing, collecting, analyzing, and interpreting software metrics.
One of the most challenging problems that test managers face involves implementing effective, meaningful, and insightful test metrics. Data and measures are the foundation of true understanding, but the misuse of metrics causes confusion, bad decisions, and demotivation. Rex Black shares how to avoid these unfortunate situations by using metrics properly as part of your test management process. How can we measure our progress in testing a project? What can metrics tell us about the quality of the product? How can we measure the quality of the test process itself? Rex answers these questions, illustrated with case studies and real-life examples. Learn how to use test case metrics, coverage metrics, and defect metrics in ways that demonstrate status, quantify effectiveness, and support smart decision making. Exercises provide immediate opportunities for you to apply the techniques to your own testing metrics. Join Rex to jump-start a new testing metrics program or gain new ideas to improve your existing one.
When designing a bridge, an architect has al sorts of tools & techniques to verify it won't collapse.
Us software developers... We build. And with a bit of luck, we'll even test. And then we pray it doesn’t collapse :)
Can we measure software? Can we compare code objectively? Can we predict problems?
This document discusses software metrics and measurement. It describes how measurement can be used throughout the software development process to assist with estimation, quality control, productivity assessment, and project control. It defines key terms like measures, metrics, and indicators and explains how they provide insight into the software process and product. The document also discusses using metrics to evaluate and improve the software process as well as track project status, risks, and quality. Finally, it covers different types of metrics like size-oriented, function-oriented, and quality metrics.
This document discusses software metrics. It defines a software metric as a standard measure of a software system or process's properties. The document then classifies metrics into product, process, and resource metrics. It describes several types of product metrics including size, complexity, Halstead's, and quality metrics. The document recommends using a GQM (Goal, Question, Metric) approach to implement an effective metrics program. It provides references for further reading.
This document discusses metrics that can be used to measure software processes and projects. It begins by defining software metrics and explaining that they provide quantitative measures that offer insight for improving processes and projects. It then distinguishes between metrics for the software process domain and project domain. Process metrics are collected across multiple projects for strategic decisions, while project metrics enable tactical project management. The document outlines various metric types, including size-based metrics using lines of code or function points, quality metrics, and metrics for defect removal efficiency. It emphasizes integrating metrics into the software process through establishing a baseline, collecting data, and providing feedback to facilitate continuous process improvement.
This document discusses software metrics that can be used to measure process and project attributes. It defines key terms like measurement, measure, metric and indicator. It describes different types of metrics like process metrics, project metrics, size-oriented metrics, function-oriented metrics and quality metrics. It also discusses concepts like defect removal efficiency and redefining defect removal efficiency to measure effectiveness of quality assurance activities.
Software Engineering Practice - Software Metrics and EstimationRadu_Negulescu
This document discusses software metrics and estimation. It introduces common product and project metrics like lines of code, function points, time and effort. It describes the basis for estimation, including measures of size/scope and baseline data from previous projects. It notes the uncertainty in estimation and discusses techniques like lines of code, function points and estimation rules of thumb. The goal is to provide metrics to answer key questions for planning, monitoring and controlling a software project.
Software metrics can be used to measure various attributes of software products and processes. There are direct metrics that immediately measure attributes like lines of code and defects, and indirect metrics that measure less tangible aspects like functionality and reliability. Metrics are classified as product metrics, which measure attributes of the software product, and process metrics, which measure the software development process. Project metrics are used tactically within a project to track status, risks, and quality, while process metrics are used strategically for long-term process improvement. Common software quality attributes that can be measured include correctness, maintainability, integrity, and usability.
This document discusses software quality metrics and classifies them into two categories: process metrics and product metrics. Process metrics are related to the software development process and include software process quality metrics, timetable metrics, error removal effectiveness metrics, and productivity metrics. Product metrics are related to software maintenance and customer service. The document provides examples of specific metrics like error density metrics, error severity metrics, and high-definition quality and productivity metrics.
The document discusses different types of software metrics that can be used to measure various aspects of software development. Process metrics measure attributes of the development process, while product metrics measure attributes of the software product. Project metrics are used to monitor and control software projects. Metrics need to be normalized to allow for comparison between different projects or teams. This can be done using size-oriented metrics that relate measures to the size of the software, or function-oriented metrics that relate measures to the functionality delivered.
The document discusses software test management and planning. It notes that errors found early in the development process are less costly to fix. A graph shows that errors discovered during maintenance are 368 times more expensive to fix than requirements errors. The document recommends optimizing the software process to find errors early. It also provides guidance on test planning, including designing for testability, defining metrics, covering all requirements with tests, and integrating the test plan into the project plan.
The document discusses various concepts related to software testing including verification vs validation, inspection vs testing, black box vs white box testing, and testing techniques like equivalence partitioning, boundary value analysis, and cause-effect graphing. It defines verification as ensuring software meets specifications while validation ensures it meets customer needs. Inspection examines static documents while testing evaluates dynamic behavior. Black box testing uses requirements while white box considers internal structure.
The document discusses software maintenance. It defines software maintenance as modifying software after delivery to correct faults, improve performance, or adapt to environmental changes. Maintenance activities include error correction, enhancing capabilities, deleting obsolete functions, and optimization. Lehman's Law states that software maintenance is evolutionary development and past maintenance informs future decisions. The challenges of software maintenance include technical issues like testing and limited understanding, as well as management issues like cost estimation and priorities.
This document outlines the IEEE standard for a software quality metrics methodology. It describes the purpose of establishing software quality metrics which is to quantify quality factors and ensure quality requirements are met. It provides steps for developing a software quality metrics program which include determining quality requirements, selecting metrics, collecting and storing data, validating metrics, and revalidating metrics when the environment changes. The goal is to use metrics to predict and measure software quality in order to identify issues and ensure compliance with quality standards.
The document discusses various software quality metrics that can be used to measure product quality, in-process quality, and maintenance quality. It describes three main groups of metrics: product metrics, process metrics, and project metrics. Specific metrics discussed include defect density, mean time to failure, customer problems, function points, defect arrival patterns, phase-based defect removal, and customer satisfaction surveys. The purpose of these metrics is to understand relationships between quality factors and improve both the development process and product quality.
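Two of the metrics named above, defect density and mean time to failure, can be illustrated with a short sketch; the formulas are the common definitions and all figures are hypothetical:

```python
def defect_density(defects: int, kloc: float) -> float:
    """Product metric: defects found per thousand lines of code."""
    return defects / kloc

def mean_time_to_failure(failure_times_hours: list[float]) -> float:
    """MTTF: average operating time between successive observed failures."""
    intervals = [b - a for a, b in zip(failure_times_hours, failure_times_hours[1:])]
    return sum(intervals) / len(intervals)

print(defect_density(45, 30.0))  # → 1.5 defects per KLOC
# Failures observed at 0, 100, 260, and 500 cumulative operating hours.
print(mean_time_to_failure([0.0, 100.0, 260.0, 500.0]))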
Software Measurement: Establishing a Software Measurement Process (Amin Bandeali)
This document provides guidelines for establishing a software measurement process. It outlines objectives like tying measurement to goals and defining measurements clearly. It recommends following the Plan-Do-Check-Act cycle and using the Entry-Task-Validation-Exit framework. The document also describes starting a software measurement program by forming a focal group, identifying objectives, designing and prototyping the process, documenting and implementing it, then expanding the program. The overall goal is to provide insights into software processes and products to make better decisions through measurement.
Software metrics involve collecting measurements related to software development processes, projects, and products. There are different types of metrics including process, project, and product metrics. Process metrics measure the software development lifecycle, project metrics measure team efficiency, and product metrics measure quality. Metrics can also be private, used by individuals, or public, used to measure teams and processes. Size-oriented metrics are computed based on the size of the software, often expressed in lines of code.
Testing metrics provide objective measurements of software quality and the testing process. They measure attributes like test coverage, defect detection rates, and requirement changes. There are base metrics that directly capture raw data like test cases run and results, and calculated metrics that analyze the base metrics, like first run failure rates and defect slippage. Tracking these metrics throughout testing provides visibility into project readiness, informs management decisions, and identifies areas for improvement. Regular review and interpretation of the metrics is needed to understand their implications and make changes to the development lifecycle.
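The distinction between base and calculated metrics can be sketched as follows; the formulas are common definitions and the counts are hypothetical:

```python
def first_run_failure_rate(failed_first_run: int, executed_first_run: int) -> float:
    """Calculated metric: share of test cases that failed on their first run."""
    return failed_first_run / executed_first_run

def defect_slippage(found_after_release: int, found_in_testing: int) -> float:
    """Calculated metric: share of all defects that escaped testing."""
    return found_after_release / (found_after_release + found_in_testing)

# Base metrics are the raw counts; calculated metrics are derived from them.
print(first_run_failure_rate(12, 80))   # → 0.15
print(defect_slippage(5, 45))           # → 0.1
```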
1. The document discusses software quality and reliability in engineering. It defines quality as software being bug-free, on time, meeting requirements, and maintainable. Reliability is the probability of failure-free operation over time in a given environment.
2. Ensuring quality involves preventing and detecting faults during all phases of the software development life cycle from requirements to testing. The V-model helps achieve quality by involving testers early on.
3. Reliability focuses on avoiding faults during design and detecting problems during all phases through techniques like fault tolerance, forecasting, and measuring metrics like MTBF.
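As a small worked example of the reliability definition in point 1, under the common assumption of a constant failure rate (the simple exponential model), the probability of failure-free operation over a time interval can be computed from the MTBF; this sketch is illustrative, not taken from the deck itself:

```python
import math

def reliability(t_hours: float, mtbf_hours: float) -> float:
    """Probability of failure-free operation for t hours, assuming a constant
    failure rate (exponential model): R(t) = exp(-t / MTBF)."""
    return math.exp(-t_hours / mtbf_hours)

# Chance of surviving a 10-hour mission with an MTBF of 100 hours (~0.905).
print(round(reliability(10.0, 100.0), 4))
```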
This document provides instructions for creating a UML use case diagram using Star UML software. It explains how to open the software, add a package, actors, use cases, and define relationships between elements using generalizations, associations, and other connections. Screenshots demonstrate each step, such as adding a package and actors, naming elements, and arranging and copying the diagram. The overall process to create a use case diagram is outlined.
Software Test Metrics and Measurements (Davis Thomas)
Explains in detail, with examples, how to calculate:
1. Percentage of test cases executed [test coverage]
2. Percentage of test cases not executed
3. Percentage of test cases passed
4. Percentage of test cases failed
5. Percentage of test cases blocked/deferred
6. Defect density
7. Defect removal efficiency (DRE)
8. Defect leakage
9. Defect rejection ratio [invalid bug ratio]
10. Percentage of critical defects
11. Percentage of high defects
12. Percentage of medium defects
13. Percentage of low/lowest defects
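A minimal sketch of how several of the listed metrics fall out of raw test-cycle counts; the formulas are the usual definitions and every count below is hypothetical:

```python
def pct(part: int, total: int) -> float:
    """Express part as a percentage of total."""
    return 100.0 * part / total

# Base counts for a hypothetical test cycle.
total_cases, executed, passed, failed, blocked = 200, 180, 150, 24, 6
internal_defects, escaped_defects = 90, 10   # found in testing vs. after release
rejected_defects, reported_defects = 8, 100  # invalid bugs vs. all bugs reported

print(pct(executed, total_cases))                 # % test cases executed
print(pct(total_cases - executed, total_cases))   # % test cases not executed
print(pct(passed, executed))                      # % passed (of executed)
print(pct(failed, executed))                      # % failed
print(pct(blocked, executed))                     # % blocked/deferred
print(pct(internal_defects, internal_defects + escaped_defects))  # DRE
print(pct(escaped_defects, internal_defects + escaped_defects))   # defect leakage
print(pct(rejected_defects, reported_defects))                    # rejection ratio
```

Note that DRE and defect leakage are complements: every defect either was removed before release or leaked past it.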
The document discusses concepts related to data dictionaries, including defining data flows, structures, elements, and stores. It provides examples of how each component would be defined in a data dictionary, including the level of detail needed for accurate documentation and analysis. Specific symbols and notations are presented for representing data structures algebraically and defining element attributes like name, length, type, validation rules, and more. The data dictionary is described as a key tool for analyzing a system's data and ensuring consistency across users and applications.
The document describes a data dictionary, which includes:
1) A notation for describing the content and values of data that a software system will process and create.
2) Information about where and how data items are used.
3) A repository that also contains relationships between data items.
4) Best developed using CASE tools to represent the data dictionary notation and examples.
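As an illustration of such notation, here is a hypothetical order record written in the common DeMarco-style algebraic form (where = means "is composed of", + "and", { } iteration, ( ) optional, and [ | ] selection); the record itself is invented, not taken from the document:

```
Order        = order_id + order_date + {order_item} + (discount_code)
order_item   = product_id + quantity + unit_price
payment_type = ["cash" | "card" | "transfer"]
```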
Software Measurement, Lecture 1: Measures and Metrics (Programeter)
Materials of the lecture on metrics and measures held by Programeter leadership during the Software Economics course at Tartu University: courses.cs.ut.ee/2010/se
Web forms are a vital part of ASP.NET applications and are used to create the web pages that clients request. Web forms allow developers to create web applications using a similar control-based interface as Windows applications. The ASP.NET page processing model includes initialization, validation, event handling, data binding, and cleanup stages. The page acts as a container for other server controls and includes elements like the page header.
The document discusses the purpose and importance of a data dictionary for systems analysis and design. It explains that a data dictionary defines the data about data (metadata) and includes information about entities, attributes, relationships, data flows, structures, elements and data stores. It provides examples of how these components are defined and categorized in a data dictionary. The document also outlines the process of analyzing inputs and outputs, developing data stores, and creating the overall data dictionary.
Software re-engineering is a process of examining and altering a software system to restructure it and improve maintainability. It involves sub-processes like reverse engineering, redocumentation, and data re-engineering. Software re-engineering is applicable when some subsystems require frequent maintenance and can be a cost-effective way to evolve legacy software systems. The key advantages are reduced risk compared to new development and lower costs than replacing the system entirely.
The document provides an overview of Unified Modeling Language (UML) including its history, basic building blocks, and types of diagrams. It describes that UML was created in the 1990s to standardize modeling languages and combines concepts from object-oriented analysis and design. The basic building blocks of UML are things (model elements), relationships, and diagrams used to visualize models. There are several types of diagrams for structural and behavioral modeling.
The document discusses software reliability and reliability growth models. It defines software reliability and differentiates it from hardware reliability. It also describes some commonly used software reliability growth models like Musa's basic and logarithmic models. These models make assumptions about fault removal over time to predict how failure rates will change as testing progresses. The key challenges with models are uncertainty and accurately estimating their parameters.
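Musa's basic execution-time model, mentioned above, assumes that failure intensity decays exponentially as faults are removed. A sketch of its standard formulas follows; the parameter values are purely illustrative:

```python
import math

def musa_basic_mean_failures(tau: float, lambda0: float, nu0: float) -> float:
    """Expected cumulative failures after execution time tau:
    mu(tau) = nu0 * (1 - exp(-lambda0 * tau / nu0)),
    where lambda0 is the initial failure intensity and nu0 the total
    expected number of failures."""
    return nu0 * (1.0 - math.exp(-lambda0 * tau / nu0))

def musa_basic_intensity(tau: float, lambda0: float, nu0: float) -> float:
    """Failure intensity at time tau; decays as faults are removed."""
    return lambda0 * math.exp(-lambda0 * tau / nu0)

# Illustrative parameters: 100 total expected failures, 5 failures/CPU-hour initially.
print(musa_basic_mean_failures(10.0, 5.0, 100.0))  # failures expected by tau = 10
print(musa_basic_intensity(10.0, 5.0, 100.0))      # intensity remaining at tau = 10
```

As the abstract notes, the practical difficulty is estimating lambda0 and nu0 from noisy failure data, not evaluating the formulas themselves.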
The document discusses data flow diagrams (DFDs) including:
- DFD symbols such as processes, data flows, data stores, and external entities
- Rules for connecting the symbols
- How to create context diagrams and level-0 DFDs to break down a system
- Strategies for developing DFDs such as top-down and bottom-up
It provides an example of drawing a context diagram and level-0 DFD for an order system.
A data dictionary is a central repository that contains metadata about the data in a database. It describes the structure, elements, relationships and other attributes of the data. A well-designed database will include a data dictionary to provide information about the type of data in each table, row and column without accessing the actual database. This ensures data consistency when multiple users access the database. A data dictionary can be integrated with the database management system or be a standalone tool. It should be easily accessible and searchable by all database users.
This document defines a data flow diagram (DFD) and its components. A DFD is a graphical representation of how data flows through a system. It shows external entities, processes, data stores, and data flows. External entities interact with the system, processes manipulate data, data stores hold data, and data flows show the movement of data. The document provides examples of DFD symbols and components. It also explains that DFDs can be leveled to show more detail at each level, with level 0 providing an overview and higher levels showing more granular processes.
This document discusses function-oriented software design. It explains that function-oriented design represents a system as a set of functions that transform inputs to outputs. The chapter objectives are to explain function-oriented design, introduce design notations, illustrate the design process with an example, and compare sequential, concurrent and object-oriented design strategies. Topics covered include data-flow design, structural decomposition, detailed design, and a comparison of design strategies.
The document describes data flow diagrams (DFDs), including how they differ from flowcharts by showing the flow of data rather than control flow. It then provides steps for creating DFDs using an example of a lemonade stand: 1) List activities, 2) Create a context-level DFD identifying sources and sinks, 3) Create a level 0 DFD identifying subprocesses, and 4) Create level 1 DFDs decomposing subprocesses and identifying data stores.
This thesis document describes Sadia Sharmin's research on software defect prediction. The document includes an abstract that discusses the importance of attribute selection for building accurate defect prediction models. It also lists publications from the research and acknowledges those who supported the research. The body of the document contains chapters that provide background on defect prediction, review related work, describe the proposed methodology called SAL, present results, and draw conclusions.
This document provides an overview of software estimation techniques. It discusses why estimation is important, the general estimation process, and factors that impact accuracy such as requirements management and experience. The document also describes different estimation methods like expert judgment, analogy-based, top-down, and bottom-up. It provides examples of estimation tools like Function Point Analysis and COCOMO II for sizing and effort estimation.
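As an illustration of algorithmic effort estimation, here is the classic basic COCOMO formula (the simpler predecessor of the COCOMO II model named above, which adds scale factors and cost drivers). The constants are the published values for "organic" projects; the size figure is hypothetical:

```python
def cocomo_basic_effort(kloc: float, a: float = 2.4, b: float = 1.05) -> float:
    """Basic COCOMO effort in person-months: E = a * KLOC**b.
    Defaults are the published constants for 'organic' (small, familiar) projects."""
    return a * kloc ** b

# Estimated effort for a hypothetical 32 KLOC organic project (~91 person-months).
print(round(cocomo_basic_effort(32.0), 1))
```

The exponent b > 1 encodes the diseconomy of scale: doubling size more than doubles the estimated effort.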
This document discusses productivity improvement initiatives in the software industry. It examines challenges in measuring software productivity, including why it should be measured, who should collect data, and what should be measured. Some key factors that affect productivity are identified, such as product characteristics, the development process, and work environment settings. Approaches like Lean are presented as ways to eliminate waste and improve flow. A case study example demonstrates specific measures taken by a company to enhance productivity, such as process optimization and testing improvements. The conclusion is that understanding software productivity factors can help organizations implement effective initiatives to reduce costs and improve quality.
RCA on Residual Defects: Techniques for Adaptive Regression Testing (Indium Software)
One of the important issues associated with a system-lifespan view, ignored in past years, is the effect of enduring defects (defects that persist undetected) across several releases of a system. Many studies to date have evaluated regression-testing techniques only in limited contexts, such as short-term assessments, which do not fully account for industry-scale conditions.
Reports estimate that regression testing consumes as much as 80% of the overall testing budget and up to 50% of the cost of software maintenance.
Chapter 11: Metrics for Process and Projects (ssuser3f82c9)
This document discusses software process and project metrics. It describes two types of metrics - process metrics and project metrics. Process metrics are collected across projects over long periods of time to enable long-term process improvement. Project metrics enable project managers to assess project status, track risks, uncover problems, adjust work, and evaluate team ability. Measurement data is collected by projects and converted to process metrics for software improvement.
Effectiveness of Software Product Metrics for Mobile Applications (Tanveer Ahmad)
This document discusses the effectiveness of software product metrics for mobile applications. It defines effectiveness and explores how metrics can be used to measure the quality, performance, and efficacy of mobile apps. The document reviews literature on software metrics and their importance. It also examines different types of product metrics like size, complexity, and defect metrics. Finally, it proposes using a statistical simulation approach and developing a new measurement tool called the Effectiveness Calculation Model for Mobile Applications to quantify mobile app performance using computational mathematics.
Identifying Software Performance Bottlenecks Using Diagnostic Tools (Impetus Technologies)
For Impetus' White Papers archive, visit: http://www.impetus.com/whitepaper
This paper focuses on software performance diagnostic tools and how they enable rapid bottleneck identification.
This document discusses identifying the information needs that should drive an organization's measurement process. It begins by describing the measurement process model from ISO, PSM, and CMMI standards which place information needs outside the core measurement process. It then provides 9 common types of information needs found in software and systems organizations: 1) current management practices, 2) requirements measures, 3) historical program risks, 4) current program risks, 5) processes being improved, 6) software quality measures, 7) program plan assumptions, 8) process performance data, and 9) organizational policy needs. Identifying an organization's true information needs ensures the measurement process provides useful information to managers for monitoring and controlling programs.
This document provides an overview of software testing and is intended for beginners. It discusses key topics such as the software development life cycle, types of testing, test planning and case development, defect tracking, test automation, and certifications. The document is presented over multiple pages and sections covering these essential software testing concepts and processes at a high level to introduce new testers to the field.
Implementation of Risk-Based Approach for Quality & Cost Optimization (Sonata Software)
As a practiced trend in IT projects, testing is performed only toward the end of a project: teams dedicate hours to testing for possible risks and flaws after the project is ready to run. Because testing at this stage invites several last-minute modifications that can cause discomfort, or sometimes even refute the very concept of the project, it has become the need of the hour to find a way to detect and reduce risks at an early stage of the project. Risk-Based Testing, or RBT as referred to in this paper, is a software-testing procedure that prioritizes the development and execution of tests based on the impact and likelihood of failure of the functionality or aspect being tested, drawing on existing patterns of risk.
Through this technique, a software test engineer can select tests based on risk even before the initiation of the project. This paper outlines the Risk-Based Testing approach, describes how it can positively impact the development life cycle based on business-oriented factors, and offers organizations an actionable plan for starting a Risk-Based Testing approach for projects.
A Combined Approach of Software Metrics and Software Fault Analysis to Estima... (IOSR Journals)
The document presents a software fault prediction model that uses reliability relevant software metrics and a fuzzy inference system. It proposes predicting fault density at each phase of development using relevant metrics for that phase. Requirements metrics like complexity, stability and reviews are used to predict fault density after requirements. Design, coding and testing metrics are similarly used to predict fault densities after their respective phases. The model aims to enable early identification of quality issues and optimal resource allocation to improve reliability. MATLAB is used to define fault parameters, categories, fuzzy rules and analyze results. The goal is a multistage fault prediction model for more reliable software delivery.
This document discusses defect prediction models in software development. It begins by covering the importance of effort estimation in software maintenance planning and management. The document then discusses how data from software defect reports, including details on defects, components, testers and fixes, can be used to build reliability models to predict remaining defects. Machine learning and data mining techniques are proposed to analyze relationships between software quality across releases and to construct predictive models for forecasting time to fix defects. The document provides an overview of typical software development processes and then discusses a two-step approach to defect prediction and analysis using appropriate statistics and data mining techniques.
The Practical Software Measurement (PSM) Process provides a framework for implementing an effective software measurement process. It involves three key activities: 1) tailoring measures to the specific project, 2) applying measures by collecting and analyzing data, and 3) implementing the measurement process within the organization. The goal is to provide objective information to support project management decisions regarding cost, schedule, and technical objectives. PSM is a flexible, issue-driven process based on measurement best practices.
Modern Software Productivity Measurement: The Pragmatic Guide (CAST)
The guide explains why software productivity measurement is crucial for productivity and quality improvement programs, as well as presents best practices for collecting, analyzing, and using productivity data. Recommendations are also included to implement an effective productivity improvement program.
The document discusses software project estimation and scheduling. It explains that accurate estimation is important to avoid cost overruns and schedule slips. Common problems with estimation include predicting software costs, schedules, and risks. Fundamental estimation questions relate to effort, time, and costs. Estimation methods include expert judgment, analogy, top-down and bottom-up approaches, and algorithmic modeling. Productivity and function measures are used to estimate software size. Changing technologies also impact estimation accuracy. Compression techniques can shorten schedules through crashing or fast tracking.
Developing software analyzers tool using software reliability growth model (IAEME Publication)
The document discusses developing a software analyzer tool using a software reliability growth model to improve software quality. It proposes an Enhanced Non-Homogeneous Poisson Process (ENHPP) model to estimate software reliability measures like remaining faults and failure rate. The ENHPP model explicitly incorporates a time-varying testing coverage function and allows for imperfect debugging and coverage changes over testing and operation. It is validated on real failure data sets and shown to provide better fit than existing models. The goal is to enhance code reusability, minimize test effort estimation and improve reliability through the testing phase of the software development life cycle.
IRJET: A Novel Approach on Computation Intelligence Technique for Softwar... (IRJET Journal)
This document describes a study that uses an Adaptive Neuro-Fuzzy Inference System (ANFIS) to predict software defects early in the development process. The study uses metrics data from NASA software projects to train and test the ANFIS model. The results show that the ANFIS model is able to accurately predict defects, with low root mean square error values for both the training and testing data, indicating the model was able to generalize without overfitting. The study concludes ANFIS is an effective technique for software defect prediction that can help improve quality and reduce costs.
This document provides an overview and introduction to software testing for beginners. It discusses what software testing is, why it's important, and what testers do. Some key points covered include:
- The goal of testing is to find bugs early and ensure quality by designing and executing test cases that cover functionality, security, databases, and user interfaces.
- A good tester has skills like communication, organization, troubleshooting, and being methodical and objective in their work.
- Testing occurs at all stages of the software development life cycle from initial specifications through coding, testing, deployment and maintenance.
This document provides an overview of software testing for beginners. It discusses what software testing is, why it's important, and the roles and skills of testers. It also covers the software development and testing lifecycles, common errors, test planning, case development techniques, defect tracking, and types of test reports. The goal is to help beginners gain practical knowledge about software testing processes in real work environments.
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
“Building and Scaling AI Applications with the Nx AI Manager,” a Presentation...
Chap13
1. Part 3: Management GSAM Version 3.0
Chapter 13
Software Estimation,
Measurement, and
Metrics
2. Chapter 13: Software Estimation, Measurement & Metrics GSAM Version 3.0
Contents
13.1 Chapter Overview .................................................................................. 13-4
13.2 Software Estimation ............................................................................... 13-4
13.2.1 Measuring Software Size ................................................................... 13-6
13.2.2 Source Lines-of-Code Estimates ....................................................... 13-7
13.2.3 Function Point Size Estimates........................................................... 13-7
13.2.4 Feature Point Size Estimates.............................................................. 13-9
13.2.5 Cost and Schedule Estimation Methodologies/Techniques ............. 13-10
13.2.6 Ada-Specific Cost Estimation ......................................................... 13-12
13.3 Software Measurement ......................................................................... 13-13
13.3.1 Measures and Metrics, and Indicators ............................................. 13-13
13.3.2 Software Measurement Life Cycle................................................... 13-15
13.3.3 Software Measurement Life Cycle at Loral...................................... 13-16
13.4 Software Measurement Process ........................................................... 13-18
13.4.1 Metrics Plan ................................................................................... 13-19
13.4.2 Activities That Constitute a Measurement Process.......................... 13-21
13.4.2.1 Planning ................................................................................. 13-21
13.4.2.1.1 Define Information Needs............................................... 13-21
13.4.2.1.2 Define Indicators and Analysis Methods to Address
Information Needs ......................................................... 13-21
13.4.2.1.3 Define the Selected Measures ......................................... 13-21
13.4.2.1.4 Metrics Selection ............................................................ 13-22
13.4.3 Typical Software Measurements and Metrics.................................. 13-24
13.4.3.1 Quality ................................................................................... 13-24
13.4.3.2 Size ........................................................................................ 13-25
13.4.3.3 Complexity ............................................................................. 13-25
13.4.4 Requirements ................................................................................. 13-25
13.4.5 Effort.............................................................................................. 13-26
13.4.6 Productivity .................................................................................... 13-26
13.4.7 Cost and Schedule .......................................................................... 13-28
13.4.8 Scrap and Rework .......................................................................... 13-29
13.4.9 Support .......................................................................................... 13-30
13.4.9.1 Define the Collection Process of the Measurement Data ......... 13-30
13.4.9.2 Implementation....................................................................... 13-30
13.4.9.3 Collect the Measurement Data ................................................ 13-31
13.4.9.4 Analyze the Measurement Data to Derive Indicators .............. 13-31
13.4.9.5 Manage the Measurement Data and Indicators ....................... 13-33
13-2
13.4.9.6 Report the Indicators .............................................................. 13-33
13.4.9.7 Evaluation .............................................................................. 13-33
13.4.9.8 Review Usability of Indicators ............................................... 13-33
13.4.9.9 Begin Measurement Activities Early ....................................... 13-34
13.5 Cautions About Metrics ....................................................................... 13-34
13.6 Addressing Measurement in the Request for Proposal (RFP) ........... 13-35
13.7 References ............................................................................................ 13-36
13.1 Chapter Overview
Poor size estimation is one of the main reasons major software-intensive acquisition programs
ultimately fail. Size is the critical factor in determining cost, schedule, and effort. The failure to
accurately predict size (estimates are usually too small) results in budget overruns and late deliveries, which
undermine confidence and erode support for your program. Size estimation is a complicated
activity, the results of which must be constantly updated with actual counts throughout the life
cycle. Size measures include source lines-of-code, function points, and feature points. Complexity
is a function of size, which greatly impacts design errors and latent defects, ultimately resulting
in quality problems, cost overruns, and schedule slips. Complexity must be continuously measured,
tracked, and controlled. Another factor leading to size estimate inaccuracies is requirements
creep; requirements must also be baselined and diligently controlled.
Any human-intensive activity, without control, deteriorates over time. It takes constant attention
and discipline to keep software acquisition and development processes from breaking down —
let alone improving them. If you do not measure, there is no way to know whether processes are
on track or if they are improving. Measurement provides a way for you to assess the status of
your program to determine if it is in trouble or in need of corrective action and process
improvement. This assessment must be based on up-to-date measures that reflect current program
status, both in relation to the program plan and to models of expected performance drawn from
historical data of similar programs. If, through measurement, you diagnose your program as
being in trouble, you will be able to take meaningful, effective remedial action (e.g., controlling
requirements changes, improving response time to the developer, relaxing performance
requirements, extending your schedule, adding money, or any number of options). [See Chapter
14, The Management Challenge, for a discussion on remedial actions for troubled programs.]
Measurement provides benefits at the strategic, program, and technical levels.
A good measurement program is an investment in success by facilitating early detection of
problems, and by providing quantitative clarification of critical development issues. Metrics
give you the ability to identify, resolve, and/or curtail risk issues before they surface. Measurement
must not be a goal in itself. It must be integrated into the total software life cycle — not independent
of it. To be effective, metrics must not only be collected — they must be used! Campbell and
Koster summed up the bottom line for metrics when they exclaimed:
“If you ain’t measurin,’ you ain’t managin’ — you’re only along for the ride (downhill)!”
[CAMPBELL95]
13.2 Software Estimation
Just as you typically need to determine the weight, volume, and dynamic flight characteristics of
a developmental aircraft as part of the planning process, you need to determine how much software
to build. One of the main reasons software programs fail is our inability to accurately estimate
software size. Because we almost always estimate size too low, we do not adequately fund or
allow enough time for development. Poor size estimates are usually at the heart of cost and
schedule overruns.
The key to credible software sizing is to use a variety of software sizing techniques, and not to
rely on a single source or method for the estimate. Reliance on a single source or technique
represents a major contribution to program cost and schedule risk, especially if it is based
exclusively on a contractor’s bid. There are two common types of size inaccuracies for which
you can compensate to some degree.
1. Normal statistical inaccuracies can be dealt with by using multiple data sources and estimating
methodologies, or by using multiple organizations to do the estimating and check and analyze
results.
2. The earlier the estimate is made, the less is known about the software to be developed,
and the greater the estimating errors. [See Figure 13-1]
Basing your estimates on more than one source is sound advice for both types of discrepancies.
Because accuracy can be improved if estimates are performed with smaller product elements,
base your estimates on the smallest possible unit of each component. Then compile these
calculations into composite figures. [HUMPHREY89]
[Figure: the relative cost range of estimates narrows from roughly 4x/0.25x at the
concept-of-operation stage to near x at accepted software, as sources of uncertainty
(query types and data loads, internal data structures and buffer handling, detailed
scheduling and error-handling algorithms, programmer understanding of specifications)
are resolved across the feasibility, plans and requirements, product design, detailed
design, and development and test phases.]
Figure 13-1. Software Cost Estimation Accuracy Versus Phase [BOEHM81]
Given our shortcomings in size estimation, it is absolutely critical that you measure, track, and
control software size throughout development. You need to track the actual software size against
original estimates (and revisions) both incrementally and for the total build. Analysis is necessary
to determine trends in software size and functionality progress. Data requirements for these
measures are stated in Contract Data Requirements List (CDRL) items and should include:
• The number of distinct functional requirements in the Software Requirement Specification
(SRS) and Interface Requirement Specification (IRS),
• The number of software units contained in the Software Development Plan (SDP) or Software
Design Description (SDD), and
• Source lines-of-code (SLOC) or function point estimates for each computer software
configuration item (CSCI) and build compared to the actual source code listing for each
software unit.
Software size has a direct effect on overall development cost and schedule. Early significant
deviations in software size data indicate problems such as:
• Problems in the model(s), logic, and rationale used to develop the estimates,
• Problems in requirements stability, design, coding, and process,
• Unrealistic interpretation of original requirements and resource estimates to develop the system,
and
• Faulty software productivity rate estimates.
Significant departures from code development estimates should trigger a risk assessment of the
present and overall effort. Size-based models should be revisited to compare your development
program with those of similar domain, scope, size, and complexity, if possible.
13.2.1 Measuring Software Size
There are three basic methods for measuring software size. Historically, the primary measure of
software size has been the number of SLOC. However, it is difficult to relate software functional
requirements to SLOC, especially during the early stages of development. An alternative method,
function points, should be used to estimate software size. Function points are used primarily for
management information systems (MISs), whereas, feature points (similar to function points) are
used for real-time or embedded systems. [PUTNAM92] SLOC, function points, and feature
points are valuable size estimation techniques. Table 13-1 summarizes the differences between
the function point and SLOC methods.
FUNCTION POINTS                          SOURCE LINES-OF-CODE
Specification-based                      Analogy-based
Language independent                     Language dependent
User-oriented                            Design-oriented
Variations a function of                 Variations a function of
counting conventions                     languages
Expandable to source lines-of-code       Convertible to function points
Table 13-1. Function Points Versus Lines-of-Code
13.2.2 Source Lines-of-Code Estimates
Most SLOC estimates count all executable instructions and data declarations but exclude
comments, blanks, and continuation lines. SLOC can be used to estimate size through analogy
— by comparing the new software’s functionality to similar functionality found in other historic
applications. Obviously, having more detailed information available about the functionality of
the new software provides the basis for a better comparison. In theory, this should yield a more
credible estimate. The relative simplicity of the SLOC size measure facilitates automated and
consistent (repeatable) counting of actual completed software size. It also enables recording size
data needed to prepare accurate estimates for future efforts. The most significant advantage of
SLOC estimates is that they directly relate to the software to be built. The software can then be
measured after completion and compared with your initial estimates. [HUMPHREY89] If you
are using SLOC with a predictive model (e.g., Barry Boehm’s COnstructive COst MOdel
(COCOMO)), your estimates will need to be continually updated as new information is available.
Only through this constant re-evaluation can the predictive model provide estimates that
approximate actual costs.
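The counting convention above can be illustrated with a minimal sketch (Python, not part of the original guidance); it assumes C-style '//' line comments and deliberately omits block comments and continuation-line handling:

```python
# Count executable statements and data declarations, excluding
# comments and blank lines, per the convention described above.
def count_sloc(source: str) -> int:
    count = 0
    for line in source.splitlines():
        stripped = line.strip()
        # Skip blank lines and '//' line comments; count everything else
        if stripped and not stripped.startswith("//"):
            count += 1
    return count
```

A real counter would also handle block comments, string literals, and language-specific continuation rules, which is why standardized counting definitions matter for comparisons.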
A large body of literature and historical data exists that uses SLOC, or thousands of source lines-
of-code (KSLOC), as the size measure. Source lines-of-code are easy to count and most existing
software estimating models use SLOCs as the key input. However, it is virtually impossible to
estimate SLOC from initial requirements statements. Their use in estimation requires a level of
detail that is hard to achieve (i.e., the planner must often estimate the SLOC to be produced
before sufficient detail is available to accurately do so.) [PRESSMAN92]
Because SLOCs are language-specific, the definition of how SLOCs are counted has been
troublesome to standardize. This makes comparisons of size estimates between applications
written in different programming languages difficult even though conversion factors are available.
From SLOC estimates a set of simple, size-oriented productivity and quality metrics can be
developed for any given on-going program. These metrics can be further refined using productivity
and quality equations such as those found in the basic COCOMO model.
13.2.3 Function Point Size Estimates
Function points, as defined by A.J. Albrecht, are the weighted sums of five different factors that
relate to user requirements:
• Inputs,
• Outputs,
• Logic (or master) files,
• Inquiries, and
• Interfaces. [ALBRECHT79]
The International Function Point Users Group (IFPUG) is the focal point for function point
definitions. The basic definition of function points provided above has been expanded by several
others to include additional types of software functionality, such as those related to embedded
weapons systems software (i.e., feature points).
Function points are counted by first tallying the number of each type of function listed above.
These unadjusted function point totals are subsequently adjusted by applying complexity measures
to each type of function point. The sum of the total complexity-adjusted function points (for all
types of function points) becomes the total adjusted function point count. Based on prior
experience, the final function point figure can be converted into a reasonably good estimate of
required development resources. [For more information on function point counting, see the
“Counting Practices Manual” available from the IFPUG administrative office in Westerville,
Ohio for a nominal charge, (614) 895-3170 or Fax (614) 895-3466.]
Table 13-2 illustrates a function point analysis for a nominal program. First you count the number
of inputs, outputs, inquiries, logic files, and interfaces required. These counts are then multiplied
by established values. The total of these products is adjusted by the degree of complexity based
on the estimator’s judgment of the software’s complexity. Complexity judgments are domain-
specific and include factors such as data communications, distributed data processing,
performance, transaction rate, on-line data entry, end-user efficiency, reusability, ease of
installation, operation, change, or multiple site use. This process for our nominal program is
illustrated in Figure 13-2.
               Simple      Average      Complex     Total
Inputs         3 x ___     4 x _2_      6 x _2_       20
Outputs        4 x _1_     5 x _3_      7 x ___       19
Inquiries      3 x ___     4 x ___      6 x ___        0
Files          7 x ___    10 x _1_     15 x ___       10
Interfaces     5 x ___     7 x ___     10 x _1_       10
UNADJUSTED FUNCTION POINTS = 59
Table 13-2. Function Point Computation
[Figure: inputs, outputs, files, inquiries, and interfaces within the application
boundary give the information processing size (raw function points), which is
multiplied by the complexity adjustment factor (0.65 + 1% of the total influence
of the adjustment factors).]
Function Points = (59) (0.65 + 0.47) = (59) (1.12) = 66
SLOCs = (66 FPs) (90 'C' SLOCs/FP) = 5,940 SLOCs
Figure 13-2. Function Point Software Size Computational Process
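The computation in Table 13-2 and Figure 13-2 can be sketched in code (an illustrative Python rendering; the counts are those from the table, and a total adjustment-factor influence of 47 is assumed, as in the figure):

```python
# Standard simple/average/complex weights per function type (Table 13-2).
WEIGHTS = {
    "inputs":     (3, 4, 6),
    "outputs":    (4, 5, 7),
    "inquiries":  (3, 4, 6),
    "files":      (7, 10, 15),
    "interfaces": (5, 7, 10),
}

def unadjusted_fp(counts):
    """counts maps each type to (simple, average, complex) tallies."""
    return sum(c * w
               for ftype, tallies in counts.items()
               for c, w in zip(tallies, WEIGHTS[ftype]))

def adjusted_fp(ufp, total_influence):
    """Apply the complexity adjustment: 0.65 + 1% of total influence."""
    return round(ufp * (0.65 + 0.01 * total_influence))

counts = {
    "inputs":     (0, 2, 2),
    "outputs":    (1, 3, 0),
    "inquiries":  (0, 0, 0),
    "files":      (0, 1, 0),
    "interfaces": (0, 0, 1),
}
ufp = unadjusted_fp(counts)   # 59, as in Table 13-2
fp = adjusted_fp(ufp, 47)     # 59 * 1.12 -> 66
sloc = fp * 90                # 66 FPs * 90 'C' SLOCs/FP = 5,940
```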
While function points aid software size estimates, they too have drawbacks. At the very early
stages of system development, function points are also difficult to estimate. Additionally, the
complexity factors applied to the equation are subjective since they are based on the analyst/
engineer’s judgment. Few automated tools are available to count either unadjusted or adjusted
function points, making comparisons between or among programs difficult, and making the
function point counts for any single program inconsistent when calculated by different analysts.
However, function points are valuable in making early estimates, especially after the SRS has
been completed. Like SLOC, they too are affected by changes in system and/or software
requirements. Also, as a relatively new measure of software size, there are few significant,
widely-available databases for estimating function points by comparison (analogy) to functionally
similar historic software applications.
13.2.4 Feature Point Size Estimates
A derivative of function points, feature points were developed to estimate/measure real-time
systems software with high algorithmic complexity and generally fewer inputs/outputs than MISs.
Algorithms are sets of mathematical rules expressed to solve significant computational problems.
For example, a square root extraction routine, or a Julian date conversion routine, are algorithms.
In addition to the five standard function point parameters, feature points include an algorithm(s)
parameter which is assigned the default weight of 3. The feature point method reduces the
empirical weights for logical data files from a value of 10 to 7 to account for the reduced
significance of logical files in real-time systems. For applications in which the number of
algorithms and logical data files are the same, function and feature point counts generate the
same numeric values. But, when there are more algorithms than files, feature points produce a
greater total than function points. Table 13-3 illustrates the ratio of function point to feature
point counts for selected applications. [For a more detailed explanation of feature points, see
Capers Jones, Applied Software Measurement.] [JONES91]
APPLICATION                              FUNCTION POINTS    FEATURE POINTS
Batch MIS projects                              1                0.80
On-line MIS projects                            1                1.00
On-line database projects                       1                1.00
Switching systems projects                      1                1.20
Embedded real-time projects                     1                1.35
Factory automation projects                     1                1.50
Diagnostic and prediction projects              1                1.75
Table 13-3. Ratios of Feature Points to Function Points [JONES91]
NOTE: See Volume 2, Appendix H, “Counting Rules for Function Points and Feature
Points.”
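The weight changes described above can be sketched as follows (illustrative Python using average weights; the counts are hypothetical). Note that with equal numbers of algorithms and logical files the two counts agree, and with more algorithms than files feature points exceed function points, as stated in the text:

```python
# Average-weight function point count (files weighted 10).
def function_points(inputs, outputs, inquiries, files, interfaces):
    return 4*inputs + 5*outputs + 4*inquiries + 10*files + 7*interfaces

# Feature point count: algorithms get the default weight 3, and the
# logical-file weight drops from 10 to 7.
def feature_points(inputs, outputs, inquiries, files, interfaces, algorithms):
    return (4*inputs + 5*outputs + 4*inquiries
            + 7*files + 7*interfaces + 3*algorithms)
```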
13.2.5 Cost and Schedule Estimation Methodologies/Techniques
Most estimating methodologies are predicated on analogous software programs. Expert opinion
is based on experience from similar programs; parametric models stratify internal data bases to
simulate environments from many analogous programs; engineering builds reference similar
experience at the unit level; and cost estimating relationships (like parametric models) regress
algorithms from several analogous programs. Deciding which of these methodologies (or
combination of methodologies) is the most appropriate for your program usually depends on
availability of data, which in turn depends on where you are in the life cycle or your scope
definition.
• Analogies. Cost and schedule are determined based on data from completed similar efforts.
When applying this method, it is often difficult to find analogous efforts at the total system
level. It may be possible, however, to find analogous efforts at the subsystem or lower level
computer software configuration item/computer software component/computer software unit
(CSCI/CSC/CSU). Furthermore, you may be able to find completed efforts that are more or
less similar in complexity. If this is the case, a scaling factor may be applied based on expert
opinion (e.g., CSCI-x is 80% as complex). After an analogous effort has been found, associated
data need to be assessed. It is preferable to use effort rather than cost data; however, if only
cost data are available, these costs must be normalized to the same base year as your effort
using current and appropriate inflation indices. As with all methods, the quality of the estimate
is directly proportional to the credibility of the data.
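A minimal sketch of the cost-normalization step (Python; the index values are invented placeholders, not actual inflation indices):

```python
# Hypothetical price index by fiscal year, relative to a common reference.
INDEX = {1995: 100.0, 2000: 110.0}

def normalize_cost(cost, from_year, to_year, index=INDEX):
    """Convert a historical cost to the base year of the current effort."""
    return cost * index[to_year] / index[from_year]

# e.g., $1.0M spent in FY1995 is comparable to $1.1M in FY2000 dollars
normalized = normalize_cost(1_000_000, 1995, 2000)
```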
• Expert (engineering) opinion. Cost and schedule are estimated by determining required
effort based on input from personnel with extensive experience on similar programs. Due to
the inherent subjectivity of this method, it is especially important that input from several
independent sources be used. It is also important to request only effort data rather than cost
data as cost estimation is usually out of the realm of engineering expertise (and probably
dependent on non-similar contracting situations). This method, with the exception of rough
orders-of-magnitude estimates, is rarely used as a primary methodology alone. Expert opinion
is used to estimate lower-level, low cost, pieces of a larger cost element when a labor-intensive
cost estimate is not feasible.
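One common way to combine several independent expert inputs, though not prescribed here, is a three-point (PERT) estimate (illustrative Python; the effort inputs are hypothetical):

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """Weighted three-point combination of expert effort inputs."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

# e.g., experts give 4, 6, and 14 staff-months -> combined estimate 7.0
combined = pert_estimate(4, 6, 14)
```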
• Parametric models. The most commonly-used technology for software estimation is
parametric models, a variety of which are available from both commercial and government
sources. The estimates produced by the models are repeatable, facilitating sensitivity and
domain analysis. The models generate estimates through statistical formulas that relate a
dependent variable (e.g., cost, schedule, resources) to one or more independent variables.
Independent variables are called “cost drivers” because any change in their value results in
a change in the cost, schedule, or resource estimate. The models also address both the
development (e.g., development team skills/experience, process maturity, tools, complexity,
size, domain, etc.) and operational (how the software will be used) environments, as well as
software characteristics. The environmental factors, used to calculate cost (manpower/effort),
schedule, and resources (people, hardware, tools, etc.), are often the basis of comparison
among historical programs, and can be used to assess on-going program progress.
Because environmental factors are relatively subjective, a rule of thumb when using parametric
models for program estimates is to use multiple models as checks and balances against each
other. Also note that parametric models are not 100 percent accurate. Boehm states:
“Today, a software cost estimation model is doing well if it can estimate software development
costs within 20% of the actual costs, 70% of the time, and on its own home turf (that is, within the
class of projects to which it is calibrated…). This means that the model’s estimates will often be
much worse when it is used outside its domain of calibration.” [BOEHM81]
Boehm’s assertion is still valid today as reported by Ferens and Christensen. They report:
“…in general, model validation showed that the accuracy of the models was no better than within
25 percent of actual development cost or schedule, about one half of the time, even after calibration.”
[FERENS00]
In all fairness, the databases used to achieve the results reported in [FERENS00] were not from
a single entity, but a compilation of data from several software developers. One would assume as
developers increase their maturity levels and gather data, their ability to estimate required resources
would improve. However, if dealing with developers that do not have a good estimation database,
the wise project manager will plan for cost and schedule to be 30% to 50% higher than the
estimates provided by the parametric cost models.
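Boehm’s “within 20% of the actual costs, 70% of the time” criterion is often written as PRED(0.20) ≥ 0.70; a sketch of the statistic (Python; the data points are invented):

```python
def pred(estimates, actuals, level=0.20):
    """Fraction of estimates whose relative error is within `level`."""
    within = sum(1 for e, a in zip(estimates, actuals)
                 if abs(e - a) / a <= level)
    return within / len(actuals)

# Invented data: 3 of the 4 estimates fall within 20% of actuals.
accuracy = pred([100, 150, 90, 200], [110, 100, 100, 195])  # 0.75
```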
• Engineering build (grass roots, or bottoms-up build). Cost and schedule are determined by
estimating effort based on the effort summation of detailed functional breakouts of tasks at
the lowest feasible level of work. For software, this requires a detailed understanding of the
software architecture. Analysis is performed at the CSC or CSU level and associated effort is
predicted based on unit level comparisons to similar units. Often, this method is based on a
notional system of government estimates of most probable cost and used in source selections
before contractor solutions are known. This method is labor-intensive and is usually performed
with engineering support; however, it provides better assurance than other methods that the
entire development scope is captured in the resulting estimate.
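The roll-up itself is a simple summation over the functional breakout; a sketch (Python; the CSCI/CSC/unit effort figures are hypothetical):

```python
# Hypothetical breakout: CSCI -> CSC -> unit-level effort (staff-months).
wbs = {
    "CSCI-1": {"CSC-1.1": [3.0, 2.5], "CSC-1.2": [4.0]},
    "CSCI-2": {"CSC-2.1": [1.5, 2.0, 2.5]},
}

def total_effort(wbs):
    """Sum unit-level estimates into a composite effort figure."""
    return sum(effort
               for cscs in wbs.values()
               for units in cscs.values()
               for effort in units)

# total_effort(wbs) -> 15.5 staff-months
```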
• Cost Performance Report (CPR) analysis. Future cost and schedule estimates are based on
current progress. This method may not be an optimal choice for predicting software cost and
schedule because software is generally developed in three distinct phases (requirements/
design, code/unit test, integration/test) by different teams. Apparent progress in one phase
may not be predictive of progress in the next phases, and lack of progress in one phase may
not show up until subsequent phases. Difficulty in implementing a poor design may occur
without warning, or problems in testing may be the result of poor test planning or previously
undetected coding defects. CPR analysis can be a good starting point for identifying problem
areas, and problem reports included with CPRs may provide insight for risk assessments.
• Cost estimating relationships (CERs)/factors. Cost and schedule are estimated by determining
effort based on algebraic relationships between a dependent (effort or cost) variable and
independent variables. This method ranges from using a simple factor, such as cost per line-
of-code on a similar program with similar contractors, to detailed multi-variant regressions
based on several similar programs with more than one causal (independent) variable. Statistical
packages are commercially available for developing CERs, and if data are available from
several completed similar programs (which is not often the case), this method may be a
worthwhile investment for current and future cost and schedule estimating tasks. Parametric
model developers incorporate a series of CERs into an automated process by which parametric
inputs determine which CERs are appropriate for the program at hand.
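A simple power-law CER, effort = a × size^b, can be derived by least squares in log-log space; a sketch (Python; the data points are invented):

```python
import math

def fit_cer(sizes, efforts):
    """Fit effort = a * size**b by least squares on log-transformed data."""
    xs = [math.log(s) for s in sizes]
    ys = [math.log(e) for e in efforts]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    # Slope of the log-log regression line is the exponent b
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    a = math.exp(ybar - b * xbar)
    return a, b
```

In practice a commercial statistical package would also report confidence intervals and goodness-of-fit, which matter as much as the point coefficients.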
Of these techniques, the most commonly used is parametric modeling. There is currently no list
of recommended or approved models; however, you will need to justify the appropriateness of
the specific model or other technique you use in an estimate presented for DAB and/or MAISARC
Review. As mentioned above, determining which method is most appropriate is driven by the
availability of data. Regardless of which method is used, a thorough understanding of your software’s
functionality, architecture, and characteristics, and your contract is necessary to accurately estimate
required effort, schedule, and cost.
NOTE: Refer to “A Manager’s Checklist for Validating Software Cost and Schedule
Estimates,” CMU/SEI-95-SR-04, and “Checklists and Criteria for Evaluating the Cost and
Schedule Estimating Capabilities of Software Organizations,” CMU/SEI-95-SR-05.
13.2.6 Ada-Specific Cost Estimation
Using Ada-specific models is necessary because Ada developments do not follow the classic
patterns included in most traditional cost models. As stated above, the time and effort required
during the design phase are significantly greater (50% for Ada as opposed to 20% for non-Ada
software developments). [Ada/C++91] Another anomaly with Ada developments is productivity
13-12
Chapter 13: Software Estimation, Measurement & Metrics GSAM Version 3.0
rates. Traditional non-Ada developments have historically recorded that productivity rates decrease
as program size increases. With Ada, the opposite is often true. Due in large part to Ada’s
reusability, the larger the program size, the greater the productivity rate.
13.3 Software Measurement
You cannot build quality software, or improve your process, without measurement. Measurement
is essential to achieving the basic management objectives of prediction, progress, and process
improvement. An oft-repeated phrase by DeMarco holds true: “You can’t manage what you
can’t measure!” [DeMARCO86] All process improvement must be based on measuring where
you have been, where you are now, and properly using the data to predict where you are heading.
Collecting good metrics and properly using them always leads to process improvement!
13.3.1 Measures and Metrics, and Indicators
A software measurement is a quantifiable dimension, attribute, or amount of any aspect of a
software program, product, or process. It is the raw data which are associated with various
elements of the software process and product. Table 13-4 gives some examples of useful
management measures.
AREA            MEASURES
Requirements    • CSCI requirements
                • CSCI design stability
Performance     • Input/output bus throughput capability
                • Processor memory utilization
                • Processor throughput utilization
Schedule        • Requirements allocation status
                • Preliminary design status
                • Code and unit test status
                • Integration status
Cost            • Person-months of effort
                • Software size
Table 13-4. Example Management Indicators
Metrics (or indicators) are computed from measures. They are quantifiable indices used to compare
software products, processes, or projects or to predict their outcomes. With metrics, we can:
• Monitor requirements,
• Predict development resources,
• Track development progress, and
• Understand maintenance costs.
Metrics are used to compare the current state of your program with past performance or prior
estimates and are derived from earlier data from within the program. They show trends of
increasing or decreasing values, relative only to the previous value of the same metric. They also
show containment or breaches of pre-established limits, such as allowable latent defects.
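The trend-and-containment reading described above can be sketched as follows; the metric series, limit, and names are invented for illustration:

```python
# Hypothetical example: read a metric series relative to its own prior values
# (trend) and against a pre-established limit (containment or breach).
defects_per_ksloc = [4.1, 3.6, 3.2, 3.4, 2.9]   # one value per build (invented)
LIMIT = 3.5                                      # pre-established allowable level

def status(value: float, limit: float = LIMIT) -> str:
    """'contained' while the metric stays at or under its limit, else 'breached'."""
    return "contained" if value <= limit else "breached"

for build, value in enumerate(defects_per_ksloc, start=1):
    direction = ""
    if build > 1:
        prev = defects_per_ksloc[build - 2]
        direction = " (improving)" if value < prev else " (worsening)"
    print(f"Build {build}: {value:.1f} defects/KSLOC, {status(value)}{direction}")
```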
Metrics are also useful for determining a “business strategy” (how resources are being used and
consumed). For example, in producing hardware, management looks at a set of metrics for scrap
and rework. From a software standpoint, you will want to see the same information on how
much money, time, and manpower the process consumes that does not contribute to the end
product. One way a software program might consume too many resources is if errors made in
the requirements phase were not discovered and corrected until the coding phase. Not only does
this create rework, but an error inserted during requirements definition and not corrected until
the coding phase costs approximately 50% more to correct than one inserted and
corrected during the coding phase. [BOEHM81] The key is to catch errors as soon as possible
(i.e., in the same phase that they are induced).
Management metrics are measurements that help evaluate how well the acquisition office is
proceeding in accomplishing their acquisition plan or how well the contractor is proceeding in
accomplishing their Software Development Plan. Trends in management metrics support forecasts
of future progress, early trouble detection, and realism in plan adjustments. Software product
attributes are measured to arrive at product metrics which determine user satisfaction with the
delivered product or service. From the user’s perspective, product attributes can be reliability,
ease-of-use, timeliness, technical support, responsiveness, problem domain knowledge and
understanding, and effectiveness (creative solution to the problem domain). Product attributes
are measured to evaluate software quality factors, such as efficiency, integrity, reliability,
survivability, usability, correctness, maintainability, verifiability, expandability, flexibility,
portability, reusability, and interoperability. Process metrics are used to gauge organizations,
tools, techniques, and procedures used to develop and deliver software products. [PRESSMAN92]
Process attributes are measured to determine the status of each phase of development (from
requirements analysis to user acceptance) and of resources (dollars, people, and schedule) that
impact each phase.
NOTE: Despite the SEI’s Software Capability Evaluation (SCE) methods, process
efficiency can vary widely within companies rated at the same maturity levels, and from
program to program.
There are five classes of metrics generally used from a commercial perspective to measure the
quantity and quality of software. During development, technical and defect metrics are used.
After-market metrics are then collected, which include user satisfaction, warranty, and reputation.
• Technical metrics are used to determine whether the code is well-structured, whether manuals
for hardware and software use are adequate, and whether documentation is complete, correct,
and up-to-date. Technical metrics also describe the external characteristics of the system’s
implementation.
• Defect metrics are used to determine that the system does not erroneously process data, does
not abnormally terminate, and does not do the many other things associated with the failure
of a software-intensive system.
• End-user satisfaction metrics are used to describe the value received from using the system.
• Warranty metrics reflect specific revenues and expenditures associated with correcting
software defects on a case-by-case basis. These metrics are influenced by the level of defects,
willingness of users to come forth with complaints, and the willingness and ability of the
software developer to accommodate the user.
• Reputation metrics are used to assess perceived user satisfaction with the software and may
generate the most value, since reputation can strongly influence what software is acquired.
Reputation may differ significantly from actual satisfaction:
− Because individual users may use only a small fraction of the functions provided in any
software package; and
− Because marketing and advertising often influence buyer perceptions of software quality
more than actual use.
13.3.2 Software Measurement Life Cycle
Effective software measurement adds value to all life cycle phases. Figure 13-3 illustrates the
primary measures associated with each phase of development. During the requirements phase,
function points are counted. Since requirements problems are a major source of cost and schedule
overruns, an error and quality tracking system is put into effect. Once requirements are defined,
the next phase of measurement depends on whether the program is a custom development, or a
combination of newly developed applications, reusable assets, and COTS. The risks and values
of all candidate approaches are quantified and analyzed.
[Figure 13-3 maps measurement-related and support activities onto eight life cycle phases:
initiate a project; analyze existing situation; define new system requirements; design logical
system; design physical system; construct the system; test and implement system; and
maintenance. Measurement-related activities run from project assessment (soft factors) and
project estimate through function point (FP) sizing, defect analysis, LOC count, complexity
analysis, and maintenance measures, ending with post-implementation defect evaluation.
Support activities, project accounting (schedule and effort) and configuration management
(deliverables), span the entire cycle.]
Figure 13-3. Software Measurement Life Cycle [JONES91]
If the solution is a custom development, from the logical system design on, defect removal can
be a very costly activity. Therefore, design reviews and peer inspections are used to gather and
record defect data. The data are analyzed to determine how to prevent similar defects in the
future. During physical design, reviews and peer inspections continue to be valuable and defect
data are still collected and analyzed. The coding phase can be anything from very difficult to
almost effortless, depending upon the success of the preceding phases. Defect and quality data,
as well as code complexity measures, are recorded during code reviews and peer inspections.
One of the most important characteristics of your software development is complexity. With the
first executable architecture, throughout the design phases, and subsequently, as the code is
developed and compiled, evaluate the complexity of your proposed development. This will give
you important insight into the feasibility of bringing your program to a successful conclusion
(i.e., to provide the desired functional performance on the predicted schedule, at the estimated
cost).
The testing phase can range from simple unit tests conducted by the programmers to a full-scale,
multi-staged formal test suite, including function tests, integration tests, stress tests, regression
tests, independent tests, field tests, system tests, and final acceptance tests. During all these
activities, in-depth defect data are collected and analyzed for use in subsequent defect prevention
activities such as changing the development process to prevent similar defects in future
developments. Retrospective analyses are performed on defect removal efficiencies for each
specific review, peer inspection, and test, and on the cumulative efficiency of the overall series of
defect removal steps.
During the maintenance phase, both user satisfaction and latent defect data are collected. For
enhancements or replacements of existing systems, the structure, complexity, and defect rates of
the existing software are determined. Further retrospective analysis of defect removal efficiency
is also performed. A general defect removal goal is a cumulative defect removal efficiency of
95% for MIS, and 99.9% (or greater) for real-time weapon systems. That is, defects found by the
development team and defects reported by users are summed after the first (and subsequent)
year(s) of operations; ideally, the development team will have found and removed 95% to
100% of all latent defects. [JONES91]
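As a sketch of the arithmetic behind this goal (the defect counts below are invented, not from [JONES91]):

```python
# Hypothetical example: cumulative defect removal efficiency (DRE) is the
# share of all known defects that were removed before delivery, once
# first-year field defects are added to the total.

def defect_removal_efficiency(removed_pre_delivery: int,
                              found_in_operation: int) -> float:
    total = removed_pre_delivery + found_in_operation
    return removed_pre_delivery / total if total else 1.0

# Invented MIS program: 950 defects removed during development,
# 50 latent defects reported by users in the first year of operation.
dre = defect_removal_efficiency(950, 50)
print(f"Cumulative DRE = {dre:.1%}")  # 95.0%: meets the MIS goal, not the weapon-system goal
```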
13.3.3 Software Measurement Life Cycle at Loral
Table 13-5 illustrates how Loral Federal Systems collects and uses metrics throughout the software
life cycle. The measurements collected and the metrics (or in-process indicators) derived from
the measures change throughout development. At the start of a program (during the proposal
phase or shortly after contract start), detailed development plans are established. These plans
provide planned start and completion dates for each CDRL (CSU or CSCI). From these plans, a
profile is developed showing the planned percent completion per month over the life of the
program for all the software to be developed. Also at program start, a launch meeting is conducted
to orient the team to a common development process and to present lessons-learned from previous
programs.
PROGRAM PHASE       MEASUREMENT COLLECTION                IN-PROCESS INDICATORS (METRICS)
At Start            Establish SPD (planned start/         Initial plan-complete profiles
                    completion for each phase)
During Design,      Weekly: update and complete           % planned versus % actual complete;
Coding,             plan items                            Inspection effectiveness (defects found /
Unit Testing        Regularly: inspect defect data        defects remaining); Defects / KSLOC;
                    Bi-monthly: causal analysis meeting   Process improvements identified;
                    Quarterly: update measurements        Product quality (projected); Staffing profile
During Integration  Program Trouble Reports (PTRs)        PTR density (PTRs / KSLOC);
Testing             Development team survey               Open PTRs versus time;
                    transferred to integration and        % PTRs satisfied and survey comments
                    test team
At End              Document lessons learned              % satisfied and customer comments used
                    Customer survey                       for future improvements
                    PTRs during acceptance                Product quality (actual)
Table 13-5. Collection and Use of Metrics at Loral
During software design, coding, and unit testing, many different measurements and metrics are
used. On a weekly basis, actual percent completion status is collected against the plan established
at program start. Actuals are then plotted against the plan to obtain early visibility into any
variances. The number of weeks (early or late) for each line item is also tracked to determine
where problems exist. Source statement counts (starting as estimates at program start) are updated
whenever peer inspections are performed. A plot of code change over time is then produced.
Peer inspections are conducted on a regular basis and defects are collected. Peer inspection
metrics include peer inspection efficiency (effectiveness) (percent of defects detected during
inspections versus those found later) and expected product quality [the number of defects detected
per thousand source lines-of-code (KSLOC)]. Inspection data are used to project the expected
latent defect rate (or conversely, the defect removal efficiency rate) after delivery. At the completion
of each development phase, defect causal analysis meetings are held to examine detected defects
and to establish procedures, which when implemented, will prevent similar defects from occurring
in the future.
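A minimal sketch of these two inspection metrics, with invented counts (the variable names are illustrative, not Loral's):

```python
# Hypothetical example: inspection effectiveness (percent of defects detected
# during peer inspections versus those found later) and defect density per
# thousand source lines of code (KSLOC). All figures are invented.
defects_found_in_inspections = 180
defects_found_later = 20            # escaped inspections, caught in test
ksloc = 40.0

# Effectiveness: inspection-detected defects over all defects known so far.
effectiveness = defects_found_in_inspections / (
    defects_found_in_inspections + defects_found_later)

# Defect density to date, per KSLOC.
density = (defects_found_in_inspections + defects_found_later) / ksloc

print(f"Inspection effectiveness: {effectiveness:.0%}")  # 90%
print(f"Defects per KSLOC: {density:.1f}")               # 5.0
```

Data like these are what feed the projection of the expected latent defect rate after delivery.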
During development, program measurements are updated periodically (either monthly or quarterly)
in the site metrics database. These measurements include data such as cost, effort, quality, risks,
and technical performance measures. Other metrics, such as planned versus actual staffing profiles,
can be derived from these data. During integration and test, trouble report data are collected.
Two key metrics are produced during this phase: the Program Trouble Report (PTR) density (the
number of defects per KSLOC) and Open PTRs over time. An internal development team survey
is transferred to the integration and test team so they can derive the team’s defect removal efficiency
and PTR satisfaction for the internal software delivery.
At the end of the program, lessons-learned for process improvement are documented for use in
future team launches. If possible, a customer survey is conducted to determine a customer
satisfaction rating for the program. Delivered product quality is measured during acceptance
testing and compared to the earlier projected quality rate. These actuals are then used to calibrate
the projection model for future and on-going programs.
13.4 Software Measurement Process
The software measurement process must be an objective, orderly method for quantifying, assessing,
adjusting, and ultimately improving the development process. Data are collected based on known,
or anticipated, development issues, concerns, and questions. They are then analyzed with respect
to the software development process and products. The measurement process is used to assess
quality, progress, and performance throughout all life cycle phases. The key components of an
effective measurement process are:
• Clearly defined software development issues and the measure (data elements) needed to
provide insight into those issues;
• Processing of collected data into graphical or tabular reports (indicators) to aid in issue analysis;
• Analysis of indicators to provide insight into development issues; and,
• Use of analysis results to implement process improvements and identify new issues and
problems.
Your normal avenue for obtaining measurement data from the contractor is via contract CDRLs.
A prudent, SEI Level 3 contractor will implement a measurement process even without government
direction. This measurement process includes collecting and receiving actual data (not just
graphs or indicators), and analyzing those data. To some extent the government program office
can also implement a measurement process independent of the contractor’s, especially if the
contractor is not sufficiently mature to collect and analyze data on his own. In any case, it is
important for the Government and the contractor to meet and discuss analysis results. Measurement
activities keep you actively involved in, and informed of, all phases of the development process.
Figure 13-4 illustrates how measurement was integrated into the organizational hierarchy at
IBM-Houston. It shows how a bottom-up measurement process is folded into corporate activities
for achieving corporate objectives.
[Figure 13-4 is a pyramid linking objectives (left face) to measures (right face): the corporate
vision at the top; business units with market and financial objectives; business operating
systems with customer satisfaction, flexibility, and productivity; and departments and work
centers measuring quality, delivery, cycle time, and waste. The left half represents external
effectiveness, the right half internal efficiency.]
Figure 13-4. Organizational Measurement Hierarchy
13.4.1 Metrics Plan
NOTE: Practical Software Measurement: A Foundation for Objective Project
Management, available at www.psmsc.com, and Practical Software Measurement:
Measuring for Process Management and Improvement (CMU/SEI-97-HB-003), available
at www.sei.cmu.edu should be reviewed in conjunction with the remainder of this chapter.
For measurement to be effective, it must become an integral part of your decision-making process.
Insights gained from metrics should be merged with process knowledge gathered from other
sources in the conduct of daily program activities. It is the entire measurement process that gives
value-added to decision-making, not just the charts and reports. [ROZUM92] Without a firm
Metrics Plan, based on issue analysis, you can become overwhelmed by statistics, charts, graphs,
and briefings to the point where you have little time for anything other than ingestion. Plan well!
Not all data are worth collecting and analyzing. Once your development program is in-process,
and your development team begins to design and produce lines-of-code, the effort involved in
planning and specifying the metrics to be collected, analyzed, and reported upon begins to pay
dividends. Figure 13-5 illustrates examples of life cycle measures and their benefits collected on
the Space Shuttle program.
PRE-PROCESS MEASURES         IN-PROCESS MEASURES                END-PROCESS MEASURES
Examples:                    Examples:                          Examples:
- Skill demographics         - Process defects                  - Product defects
- Education/training         - Requirements stability           - On-time delivery
- Process rating             - Technical performance measures   - Cycle time
- Patents/ID's               - Design readiness                 - Cost performance
- VECP's                                                        - User satisfaction
Primary Value:               Primary Value:                     Primary Value:
- Preventative               - Real-time decision making        - Corporate history
- Strategic                                                     - Lessons-learned
Figure 13-5. Space Shuttle Life Cycle Measurements
The ground rules for a Metrics Usage Plan are that:
• Metrics must be understandable to be useful. For example, lines-of-code and function
points are the most common, accepted measures of software size with which software engineers
are most familiar.
• Metrics must be economical. Metrics must be available as a natural by-product of the work
itself and integral to the software development process. Studies indicate that approximately
5% to 10% of total software development costs can be spent on metrics. The larger the
software program, the more valuable the investment in metrics becomes. Therefore, do not
waste programmer time by requiring specialty data collection that interferes with the coding
task. Look for tools which can collect most data on an unintrusive basis.
• Metrics must be field tested. Beware of software contractors who offer metrics programs
that appear to have a sound theoretical basis, but have not had practical application or
evaluation. Make sure proposed metrics have been successfully used on other programs or
are prototyped before accepting them.
• Metrics must be highly leveraged. You are looking for data about the software development
process that permit management to make significant improvements. Metrics that show
deviations of .005% should be relegated to the trivia bin.
• Metrics must be timely. Metrics must be available in time to effect change in the development
process. If a measurement is not available until the program is in deep trouble, it has no value.
• Metrics must give proper incentives for process improvement. High scoring teams are
driven to improve performance when trends of increasing improvement and past successes
are quantified. Conversely, metrics data should be used very carefully during contractor
performance reviews. A poor performance review, based on metrics data, can lead to negative
government/industry working relationships. Do not use metrics to judge team or individual
performance.
• Metrics must be evenly spaced throughout all phases of development. Effective measurement
adds value to all life cycle activities. [JONES91]
• Metrics must be useful at multiple levels. They must be meaningful to both management
and technical team members for process improvement in all facets of development.
13.4.2 Activities That Constitute a Measurement Process
Measurement can be expressed in terms of a fairly simple process. Simple is not to be confused
with easy. There are innumerable challenges to the successful implementation of an effective
measurement process. The following is a recommended measurement process logically grouped
by activity type (planning, implementation, and evaluation). During measurement planning,
information needs drive the selection of indicators. Indicators and analysis methods drive the
selection of specific measures. The specific measures drive the selection of collection processes.
During measurement implementation, the collection process provides measurement data that are
analyzed and formed into indicators used to satisfy information needs. The evaluation
activity is where metrics are judged to determine if they are providing the information required
for decision making and process improvement.
13.4.2.1 Planning
There are four activities to measurement planning (discussed below). The results of these four
activities are documented in the Measurement Plan.
13.4.2.1.1 Define Information Needs
The need for information is derived from goals and objectives. These goals and objectives may
come from project planning, project monitoring, process compliance, pilot process improvements,
and/or risk mitigation, etc. There are technical needs, project needs, managerial needs, and
organizational needs.
13.4.2.1.2 Define Indicators and Analysis Methods to Address Information Needs
Once the information needs have been prioritized, an appropriate number of indicators are selected.
Indicators are derived from the analysis of “two or more sets of measurement data”. Once the
indicators are selected, the appropriate analysis methods are defined to derive indicators from
the measurement data.
It is important during the initial growth of a measurement program that information needs be
limited. This is a change management issue. Do not overwhelm the measurement staff. On the
other hand, a little advanced planning will serve to develop measurements useful as organizations
mature.
13.4.2.1.3 Define the Selected Measures
The selected indicators and analysis methods will drive the selection of specific measures. Once
the measures are selected, the details of the measure must be documented. This includes
definitions, terms, data requirements, security requirements, data types, etc.
Once data collection has commenced, it is important that there is consistency in data definitions.
Changing a definition in midstream during data collection produces variations in data trends that
can skew the analysis of performance, quality, and related issues. If definitions do change through
process-knowledge, it is critically important that you understand each change and how these
changes will affect already collected data. [Changes in entry/exit definitions should be reflected
in an updated SDP.]
13.4.2.1.4 Metrics Selection
Marciniak and Reifer proclaim that: “Software projects don’t get into trouble all at once; instead
they get into trouble a little at a time.” [MARCINIAK90] Metrics must be selected to ascertain
your program’s trouble threshold at the earliest phase of development. Basic measures should
be tailored by defining and collecting data that address those trouble (risk) areas identified in the
Risk Management Plan. A rule of thumb for metrics is that they must provide insight into areas
needing process improvement! Which metrics to use depends on the maturity level of the
organization. Table 13-6, based on the Army’s STEP metrics process, illustrates how metrics are
used to obtain program knowledge. [See Army policy on Preparation for Implementing Army
Software Test and Evaluation Panel (STEP) Metrics Recommendations, Volume 2, Appendix
B.]
METRIC                  OBJECTIVE
Schedule                Track progress versus schedule
Cost                    Track software expenditures
Computer resource       Track planned versus actual size
utilization
Software engineering    Rate contractor environment
environment
Design stability        Rate stability of software design and planned release or block. Design
                        stability = [total modules - (deleted modules + modules with design
                        changes)] / total modules. Design thresholds: >0.9 (satisfactory),
                        0.85 to 0.9 (warning), <=0.85 (alarm)
Requirements            Track rqmts to code by showing % of software rqmts traceable to developer
traceability            specifications, coding items (CI), CSCI, or CSC. Traceability thresholds:
                        >99.5% (satisfactory), 99% to 99.5% (warning), <99% (alarm)
Requirements stability  Track changes to requirements
Fault profiles          Track open versus closed anomalies
Complexity              Assess code quality
Breadth testing         Extent to which rqmts are tested. 4 measures are: coverage (ratio of rqmts
                        tested to total rqmts); test success (ratio of rqmts passed to rqmts
                        tested); overall success (ratio of rqmts passed to total rqmts); deferred
                        (total deferred rqmts).
                        Overall success thresholds: >99% (satisfactory), 95% to 99% (warning),
                        <=95% (alarm)
Depth of testing        Track testing of code
Reliability             Monitor potential downtime due to software
Table 13-6. How Metrics Are Used for Program Management
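As a sketch of how the design stability metric and its thresholds from Table 13-6 might be computed (the module counts are invented):

```python
# Hypothetical example: compute the design stability metric of Table 13-6
# and classify it against the table's thresholds.

def design_stability(total: int, deleted: int, changed: int) -> float:
    """Fraction of the design untouched: [total - (deleted + changed)] / total."""
    return (total - (deleted + changed)) / total

def classify(stability: float) -> str:
    """Thresholds per Table 13-6: >0.9 satisfactory, 0.85-0.9 warning, <=0.85 alarm."""
    if stability > 0.9:
        return "satisfactory"
    return "warning" if stability > 0.85 else "alarm"

s = design_stability(total=400, deleted=10, changed=25)
print(f"Design stability = {s:.4f}, status: {classify(s)}")
```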
ATTENTION! Metrics selection must focus on those areas you have identified as sources
of significant risk for your program.
NOTE: Examples of minimum measures include: quarterly collation and analysis of
size (counting source statements and/or function/feature points), effort (counting staff
hours by task and nature of work; e.g., hours for peer inspection), schedule (software
events over time), software quality (problems, failures, and faults), rework (SLOC changed
or abandoned), requirements traceability (percent of requirements traced to design, code,
and test), complexity (quality of code), and breadth of testing (degree of code testing).
Contact the Global Transportation Network (GTN) Program for a copy of the GTN Software
Development Metrics Guidelines which discusses the metrics selected and gives examples of
how to understand and interpret them. See Volume 2, Appendix A for information on how to
contact the GTN program office.
13.4.3 Typical Software Measurements and Metrics
A comprehensive list of industry metrics is available for software engineering management use,
ranging from high-level effort and software size measures to detailed requirements measures and
personnel information. Quality, not quantity, should be the guiding factor in selecting metrics. It
is best to choose a small, meaningful set of metrics that have solid baselines in a similar
environment. A typical set of metrics might include:
• Quality,
• Size,
• Complexity,
• Requirements,
• Effort,
• Productivity,
• Cost and schedule,
• Scrap and rework, and
• Support.
Some industry sources of historical data are those listed in Volume 2, Appendix A. A good
source of information on optional software metrics and complexity measures are the SEI technical
reports listed in Volume 2, Appendix D. Also see Rome Laboratory report, RL-TR-94-146,
Framework Implementation Guidebook, for a discussion on software quality indicators. Another
must-read reference is the Army Software Test and Evaluation Panel (STEP) Software Metrics
Initiatives Report, USAMSAA, May 6, 1992 and policy memorandum, Preparation for
Implementing Army Software Test and Evaluation Panel (STEP) Metrics Recommendations,
Volume 2, Appendix B.
13.4.3.1 Quality
Measuring product quality is difficult for a number of reasons. One reason is the lack of a precise
definition for quality. Quality can be defined as the degree of excellence that is measurable in
your product. The IEEE definition for software quality is the degree to which a system, component,
or process meets:
• Specified requirements
• Customer or user needs or expectations. [IEEE90]
Quality is in the eye of the user! For some programs, product quality might be defined as reliability
[i.e., a low failure density (rate)], while on others maintainability is the requirement for a quality
product. [MARCINIAK90] Your definition of a quality product must be based on measurable
quality attributes that satisfy your program’s specified user requirements. Because requirements
differ among programs, quality attributes will also vary. [MARCINIAK90]
13.4.3.2 Size
Software size is indicative of the effort required. Software size metrics have been discussed
above as SLOC, function points, and feature points.
13.4.3.3 Complexity
Complexity measures focus on designs and actual code. They assume there is a direct correlation
between design complexity and design errors, and code complexity and latent defects. By
recognizing the properties of each that correlate to their complexity, we can identify those high-
risk applications that either should be revised or subjected to additional testing.
The software properties that correlate with complexity are size; interfaces among modules
(usually measured as fan-in, the number of modules invoking a given application, or fan-out, the
number of modules invoked by a given application); and structure (the number of paths within a
module). Complexity metrics help determine the number and type of tests needed to cover the
design (interfaces or calls) or coded logic (branches and statements).
There are several accepted methods for measuring complexity, most of which can be calculated
by using automated tools.
NOTE: See Appendix M, Software Complexity, for a discussion on complexity analysis.
13.4.4 Requirements
Requirements changes are a major source of software size risk. If not controlled and baselined,
requirements will continue to grow, increasing cost, schedule, and fielded defects. If requirements
evolve as the software evolves, it is next to impossible to develop a successful product. Software
developers find themselves shooting at a moving target and throwing away design and code
faster than they can crank it out [scrap and rework is discussed below]. Colonel Robert Lyons,
Jr., former co-leader of the F-22 System Program Office Avionics Group, cites an “undisciplined
requirements baselining process” as the number one cause of “recurring weapons system
problems.” An undisciplined requirements process is characterized by:
• Inadequate requirements definition,
• Late requirements clarification,
• Derived requirements changes,
• Requirements creep, and
• Requirements baselined late. [LYONS91]
Once requirements have been defined, analyzed, and written into the System Requirements
Specification (SRS), they must be tracked throughout subsequent phases of development. In a
system of any size this is a major undertaking. The design process translates user-specified (or
explicit) requirements into derived (or implicit) requirements necessary for the solution to be
turned into code. This multiplies requirements by a factor of sometimes hundreds. [GLASS92]
Each implicit requirement must be fulfilled, traced back to an explicit requirement, and addressed
in design and test planning. It is the job of the configuration manager to guarantee the final
system meets original user requirements. As the requirements for a software solution evolve
into a design, there is a snowball effect when converting original requirements into design
requirements needed to convert the design into code. Conversely, sometimes requirements are
not flowed down and get lost during the development process (dropped through the developmental
crack) with a resulting loss in system performance or function. When requirements are not
adequately tracked, interface data elements can disappear, or extra interface requirements can be
introduced. Missing requirements may not become apparent until system integration testing,
where the cost to correct this problem is exponentially high.
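A minimal sketch of such a traceability check, using invented requirement identifiers, might flag both derived requirements with no recorded parent and explicit requirements that were never flowed down:

```python
# Hypothetical traceability check: every derived (implicit) requirement
# should trace back to an explicit requirement, and every explicit
# requirement should be covered by at least one derived requirement.
explicit = {"SRS-001", "SRS-002", "SRS-003"}

# Map of derived requirement -> the explicit requirement it traces to.
# None marks a derived requirement with no recorded parent.
trace = {
    "DRV-101": "SRS-001",
    "DRV-102": "SRS-001",
    "DRV-103": "SRS-002",
    "DRV-104": None,
}

# Derived requirements with no valid parent (extra or untraced work).
untraced = [d for d, parent in trace.items() if parent not in explicit]

# Explicit requirements dropped during flow-down ("developmental crack").
uncovered = explicit - {p for p in trace.values() if p is not None}

print(untraced)           # ['DRV-104']
print(sorted(uncovered))  # ['SRS-003']
```

Run periodically, a check like this surfaces dropped or orphaned requirements long before system integration testing, where the cost to correct them is highest.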
13.4.5 Effort
In the 1970s, Rome Air Development Center (RADC) collected data on a diverse set of over 500
DoD programs. The programs ranged from large (millions of lines-of-code and thousands of
months of effort) to very small (a one month effort). The data was sparse and rather primitive but
it did include output KLOC and input effort months and duration. At that time most program
managers viewed effort and duration as interchangeable. When we wanted to cut completion
time in half, we assigned twice as many people! Lessons learned, hard knocks, and program
measures such as the RADC database indicated that the relationships between duration and
effort were quite complex, nonlinear functions. Many empirical studies over the years have
shown that manpower in large developments builds up in a characteristic way and that it is a
complex power function of software size and duration. Many estimation models were introduced,
the best known of which is Barry Boehm’s COnstructive COst MOdel (COCOMO). [HETZEL93]
13.4.6 Productivity
Software productivity is measured in the number of lines-of-code or function/feature points
delivered (i.e., SLOC that have been produced, tested, and documented) per staff month that
result in an acceptable and usable system.
Boehm explains there are three basic ways to improve software development productivity:
• Reduce the cost-driver multipliers;
• Reduce the amount of code; and
• Reduce the scalable factor that relates the number of instructions to the number of man-months
or dollars.
His model for measuring productivity is:
Productivity = Size / Effort, where
Effort = Constant x Size^Sigma x Multipliers [BOEHM81]
In the equation for effort, multipliers are factors such as efficiency of support tools, whether the
software must perform within limited hardware constraints, personnel experience and skills, etc.
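The effort equation can be sketched numerically. The sketch below uses the constant and exponent often quoted for COCOMO embedded mode (2.8 and 1.2), but the multiplier values passed in are illustrative placeholders, not entries from Boehm's cost-driver tables.

```python
def effort_staff_months(size_ksloc, constant=2.8, sigma=1.2, multipliers=(1.0,)):
    """Boehm-style effort model: Effort = Constant x Size^Sigma x Multipliers.
    Constant 2.8 and sigma 1.2 are the values commonly quoted for COCOMO
    embedded mode; the multipliers supplied are illustrative only."""
    product = 1.0
    for m in multipliers:
        product *= m  # each cost-driver multiplier scales effort up or down
    return constant * size_ksloc ** sigma * product

# Illustrative 50 KSLOC program: one multiplier above 1 (e.g. required
# reliability), one below 1 (e.g. a capable team).
effort = effort_staff_months(50, multipliers=(1.15, 0.9))

# Productivity = Size / Effort, in SLOC per staff month.
productivity = 50_000 / effort
```

Note that with all multipliers at 1.0, a 1 KSLOC program costs exactly the constant (2.8 staff months), and doubling size multiplies effort by 2^1.2.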
Figure 13-6 lists the various cost drivers that affect software development costs. Many of these
factors (not all) can be modified by effective management practices. The weight of each factor as
a cost multiplier (on a scale of 1 to 4.18, with larger numbers having the greater weight) reflects
the relative effect that factor has on total development costs. Boehm’s studies show that “employing
the right people” has the greatest influence on productivity. (Reliability and complexity are also
important multipliers.)
Software cost driver attributes and their productivity ranges (weighted cost multipliers):
Language experience: 1.20
Schedule constraint: 1.23
Data base size: 1.23
Turnaround time: 1.32
Virtual machine experience: 1.34
Virtual machine volatility: 1.49
Software tools: 1.49
Modern programming practices: 1.51
Storage constraint: 1.56
Applications experience: 1.57
Timing constraint: 1.66
Required reliability: 1.87
Product complexity: 2.36
Personnel/team capability: 4.18
Figure 13-6. Software Productivity Factors (Weighted Cost Multipliers) [BOEHM81]
The third element in the equation, the exponent Sigma, is 1.2 for embedded software in Boehm’s
COCOMO model.
NOTE: COCOMO II, an updated version of COCOMO that allows methodologies other
than the classic waterfall (such as object oriented, iterative, incremental), is available.
For large aerospace systems the value amounts to a fairly costly exponent. When you double the
size of the software, you multiply the cost by 2^1.2, which is approximately 2.3. In other words,
the cost is doubled, plus a 30% penalty for size. The size penalty, according to Boehm, results from inefficiency
influences that are a product of size. The bigger the software development, the bigger the
integration issues, the bigger the team you need, the less efficient they become.
“You bring a bigger team on board. The people on the team spend more time talking to each
other. There is more learning curve effect as you bring in people who don’t know what you are
building. There is a lot of thrashing in the process any time there is a proposed change of things
that haven’t been determined that people are trying to determine. Any time you do have a change,
there are ripple effects that are more inefficient on big programs than on small programs.”
[BOEHM89]
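The arithmetic behind the size penalty is easy to check:

```python
# Doubling size under COCOMO's 1.2 exponent multiplies effort by 2**1.2.
penalty = 2 ** 1.2
print(round(penalty, 2))  # about 2.3: cost doubles, plus roughly a 30% size penalty
```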
13.4.7 Cost and Schedule
As previously discussed, the driving factors in DoD software development have always been
cost, schedule, or both. A typical DoD scenario has been for the software development schedule
to be accelerated to support the overall program schedule, increasing the cost of the software and
reducing quality. Because cost is the key issue in any development program, it must be reviewed
carefully as part of the program review and approval process.
"I conceive that the great part of the miseries of mankind are brought upon them by the false
estimates they have made of the value of things." — Benjamin Franklin [FRANKLIN33]
To avoid the miseries of a runaway program, you must carefully plan for and control the cost and
schedule of your software development effort. These measures are important for determining
and justifying the required funding, for judging whether a specific proposal is reasonable, and
for ensuring that the software development schedule is consistent with the overall system schedule.
Cost measures should also be used to evaluate whether the developer has the appropriate mix
and quantity of assigned staff, and to develop a reasonable program cost and schedule baseline
for effective and meaningful program management.
While size is by far the most significant driver of cost and schedule, other factors impact them as
well. These factors are usually more qualitative in nature and address the development and
operational environments as well as the software’s characteristics. Most software cost estimating
models use these factors to determine environmental and complexity factors which are in turn
used in computations to calculate effort and cost.
Multi-source cost and schedule estimation is the use of multiple, independent organizations,
techniques, and models to estimate cost and schedule, including analysis and iteration of the
differences between estimates. Whenever possible, multiple sources should be used for estimating
any unknowns, not just cost and schedule. Errors or omissions in estimates can often be identified
by comparing one with another. Comparative estimates also provide a sounder set of “should-
costs” upon which to control software development. As with size estimates, assessment from
alternate sources (such as program office software technical staff, prime or subcontractors, or
professional consulting firms) is advisable for cost and schedule. Reassessments throughout the
program life cycle improve the quality of estimates as requirements become better understood
and refined. The following summarizes the resources you should consider when costing software
development.
• Human resources. This includes the number and qualifications of the people required, as
well as their functional specialties. Boehm asserts that human resources are the most significant
cost drivers on a software development effort. [BOEHM81] Development personnel skills
and experience (reflected in their productivity) have the greatest effect on cost and schedule.
• Hardware resources. This includes development (host) and target computers, and compilers.
Hardware resources used to be major cost drivers when development personnel needed to
share equipment with multiple constituencies. Now that virtually everyone has a PC or
workstation on his or her desk, the issue is whether the target computer significantly differs
from the development computer. For instance, if the target machine is an air or spaceborne
system, the actual CPU may be technology-driven and not usable for all required development
activities.
• Software resources. Software is also used as a tool to develop other software. CASE tools
needed for development, test, and code generation must be considered. Your toolset might
include: business systems planning tools, program management tools, support tools, analysis
and design tools, programming tools, integration and test tools, prototyping and simulation
tools, maintenance tools, cost/schedule estimating tools, and architectural tools.
• Reusable resources. Reusable assets are a valuable resource that must be considered in
determining your cost requirements. This includes the assets you will develop for future
reuse by other programs, as well as searching the reuse repositories for existing code that can
be integrated into your development. Reusable assets will have significant impact on your
program cost and schedule.
Schedule measurements track the contractor’s performance towards meeting commitments, dates,
and milestones. Milestone performance metrics give you a graphical portrayal (data plots and
graphs) of program activities and planned delivery dates. It is essential that what constitutes
progress, slippage, and revisions be understood and agreed upon by both the developer and the
Government. Therefore, entry and exit criteria for each event or activity must be agreed upon at
contract award. A caution in interpreting schedule metrics: keep in mind that many activities
occur simultaneously, and slips in one or more activities usually impact others. Look for problems
in process and never, never sacrifice quality for schedule!
13.4.8 Scrap and Rework
A major factor in both software development cost and schedule is that which is either scrapped
or reworked. The costs of conformance are the normal costs of preventing defects or other
conditions that may result in the scrapping or reworking of the software. The costs of
nonconformance are those costs associated with redoing a task due to the introduction of an
error, defect, or failure on initial execution (including costs associated with fixing failures that
occur after the system is operational, i.e., scrap and rework cost).
NOTE: Good planning requires consideration of the “rework cycle.” For iterative
development efforts, rework can account for the majority of program work content and
cost!
Rework costs are very high. Boehm’s data suggest rework costs are about 40% of all software
development expenditures. Defects that result in rework are one of the most significant sources
of risk in terms of cost, delays, and performance. You must encourage and demand that your
software developer effectively measures and controls defects. Rework risk can be controlled by:
• Implementing procedures to identify defects as early as possible;
• Examining the root causes of defects and introducing process improvements to reduce or
eliminate future defects; and
• Developing incentives that reward contractors/developers for early and comprehensive defect
detection and removal.
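As a simple sketch, the rework share can be tracked as a ratio of effort; the hour figures below are invented for illustration.

```python
# Illustrative rework-cost tracking: share of total effort spent redoing work.
total_hours = 10_000   # all development effort charged to date
rework_hours = 3_800   # effort charged to fixing defects and redoing tasks

rework_fraction = rework_hours / total_hours
print(f"{rework_fraction:.0%}")  # 38%, near Boehm's ~40% observation
```

Collected consistently across phases, a ratio like this makes the cost of nonconformance visible long before delivery.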
There are currently no cost estimating models available that calculate this substantial cost factor.
However, program managers must measure and collect the costs associated with software scrap
and rework throughout development. First, it makes good sense to monitor and track the cost of
defects, and thereby to incentivize closer attention to front-end planning, design, and other defect
preventive measures. Second, by collecting these costs across all software development programs,
parametric models can be designed to better help us plan for and assess the acquisition costs
associated with this significant problem.
NOTE: See Volume 2, Appendix O, “Swords & Plowshares; The Rework Cycles of Defense
and Commercial Software Development Projects.”
13.4.9 Support
Software supportability progress can be measured by tracking certain key supportability
characteristics. With these measures, both the developer and the acquirer obtain knowledge
which can be focused to control supportability.
• Memory size. This metric tracks spare memory over time. The spare memory percentage
should not go below the specification requirement.
• Input/output. This metric tracks the amount of spare I/O capacity as a function of time. The
capacity should not go below the specification requirement.
• Throughput. This metric tracks the amount of throughput capacity as a function of time.
The capacity should not go below specification requirements.
• Average module size. This metric tracks the average module size as a function of time. The
module size should not exceed the specification requirement.
• Module complexity. This metric tracks the average complexity figure over time. The average
complexity should not exceed the specification requirement.
• Error rate. This metric tracks the number of errors compared to number of errors corrected
over time. The difference between the two is the number of errors still open over time. This
metric can be used as a value for tested software reliability in the environment for which it
was designed.
• Supportability. This metric tracks the average time required to correct a deficiency over
time. The measure should either remain constant or the average time should decrease. A
decreasing average time indicates supportability improvement.
• Lines-of-code changed. This metric tracks the average lines-of-code changed per deficiency
corrected when measured over time. The number should remain constant to show the
complexity is not increasing and that ease of change is not being degraded.
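These tracking rules lend themselves to a simple automated check. The sketch below compares each metric's latest value against a specification threshold; the metric names, limits, and measurements are hypothetical.

```python
# Hypothetical supportability tracking: compare each period's measured
# value against its specification threshold.
spec = {
    "spare_memory_pct": {"limit": 20.0, "direction": "min"},  # must stay above
    "avg_module_size":  {"limit": 200,  "direction": "max"},  # must stay below
    "avg_complexity":   {"limit": 10.0, "direction": "max"},
}

# One list per metric, oldest to newest reporting period.
measurements = {
    "spare_memory_pct": [35.0, 28.0, 18.0],  # trending down, now below spec
    "avg_module_size":  [150, 160, 170],
    "avg_complexity":   [7.0, 8.5, 9.0],
}

def violations(spec, measurements):
    """Return metrics whose latest value breaches the spec threshold."""
    out = []
    for name, rule in spec.items():
        latest = measurements[name][-1]
        if rule["direction"] == "min" and latest < rule["limit"]:
            out.append(name)
        elif rule["direction"] == "max" and latest > rule["limit"]:
            out.append(name)
    return out

print(violations(spec, measurements))  # ['spare_memory_pct']
```

Trend data like the lists above also support the supportability and lines-of-code-changed checks, which look at direction of change rather than a fixed threshold.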
13.4.9.1 Define the Collection Process of the Measurement Data
From the definition of the specific measures, collection process requirements can be identified.
These requirements are used to guide the development of the measurement data collection process.
13.4.9.2 Implementation
Measurement process implementation is controlled by the measurement plan. The implementation
activities are discussed below.
13.4.9.3 Collect the Measurement Data
This activity is simply the execution of the measurement data collection process.
As stated above, metrics are representations of the software and the software development process
that produces them — the more mature the software development process, the more advanced
the metrics process. A well-managed process, with a well-defined data collection effort embedded
within it, provides better data and more reliable metrics. Accurate data collection is the basis of
a good metrics process. Figure 13-7 illustrates the variety of software quality metrics and
management indicators that were collected and tracked for all F-22 weapon systems functions.
• # of Requirements
• # of Interfaces
• # of Units Coded
• # of Open S/W Problems
• # of Tested Units
• Manpower Loading
Figure 13-7. F-22 Data Collection for Software Development Tracking
To summarize, data collection must be woven into the developer’s process. For instance, count
the number of software units to be built by looking at the Software Design Description (SDD)
and limit the use of out-of-range data. Also, avoid collecting data that are derived, ill-defined, or
cannot be traced directly back to the process. As the development progresses, new tools or data
collection techniques may emerge. If new data collection methods are employed, the data
collection efforts must be tailored to these techniques. In addition, as the data collection changes,
your understanding of what those data mean must also change. [ROZUM92]
13.4.9.4 Analyze the Measurement Data to Derive Indicators
Indicators are derived from the analysis performed on the measurement data. The quality of the
indicator is tied to the rigor of the analysis process.
Both objective and subjective measures are important to consider when assessing the current
state of your program. Objective data consists of actual item counts (e.g., staff hours, SLOC,
function points, components, test items, units coded, changes, or errors) that can be independently
verified. Objective data are collected through a formal data collection process. Subjective data
are based on an individual’s (or group’s) feeling or understanding of a certain characteristic or
condition (e.g., level of problem difficulty, degree of new technology involved, stability of
requirements). Objective and subjective data together serve as a system of checks and balances
throughout the life cycle. If you are a resourceful manager, you will depend on both to get an
accurate picture of your program’s health. Subjective data provide critical information for
interpreting and validating objective data; while objective data provide true counts that may
cause you to question your subjective understanding and investigate further.
Analysis of the collected data must determine which issues are being addressed, and if new
issues have emerged. Before making decisions and taking action from the data, you must
thoroughly understand what the metrics mean. To understand the data, you must:
• Use multiple sources to validate the accuracy of your data and to determine differences and
causes in seemingly identical sets of data. For instance, when counting software defects by
severity, spot check actual problem reports to make sure the definition of severity levels is
being followed and properly recorded.
• Study the lower-level data collection process, understand what the data represent, and how
they were measured.
• Separate collected data and their related issues from program issues. There will be issues
about the data themselves (sometimes negating the use of certain data items). However, do
not get bogged down in data issues. You should concentrate on program issues (and the data
items in which you have confidence) to provide the desired insight.
• Do not assume data from different sources (e.g., from SQA or subcontractors) are based on
the same definitions, even if predefined definitions have been established. You must re-
verify definitions and identify any variations or differences in data from outside sources when
using them for comparisons.
• Realize development processes and products are dynamic and subject to change. Periodic
reassessment of your metrics program guarantees that it evolves. Metrics are only meaningful
if they provide insight into your current list of prioritized issues and risks.
The availability of program metrics is of little or no value unless you also have access to models
(or norms) that represent what is expected. The metrics are used to collect historical data and
experience. As their use and program input increases, they are used to generate databases of
information that become increasingly more accurate and meaningful. Using information extracted
from these databases, you are able to gauge whether measurement trends in your program differ
from similar past programs and from expected models of optimum program performance within
your software domain. Databases often contain key characteristics upon which models of
performance are designed. Cost data usually reflect measures of effort. Process data usually
reflect information about the programs (such as methodology, tools, and techniques used) and
information about personnel experience and training. Product data include size, change, and
defect information and the results of statistical analyses of delivered code.
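As an illustration, a current program's measures can be screened against a historical database using simple statistics; the metric names and values below are invented.

```python
# Sketch: flag current-program metrics that fall outside the range seen
# in a hypothetical historical database of similar programs.
import statistics

history = {  # values observed on past comparable programs
    "defects_per_ksloc": [4.1, 5.0, 3.8, 4.6, 5.2],
    "sloc_per_staff_month": [180, 220, 205, 190, 210],
}

current = {"defects_per_ksloc": 7.9, "sloc_per_staff_month": 200}

def outliers(history, current, k=2.0):
    """Metrics more than k standard deviations from the historical mean."""
    flagged = []
    for name, values in history.items():
        mean = statistics.mean(values)
        sd = statistics.stdev(values)
        if abs(current[name] - mean) > k * sd:
            flagged.append(name)
    return flagged

print(outliers(history, current))  # ['defects_per_ksloc']
```

A flagged metric does not prove a problem; it prompts the comparison against expectations that the paragraph above describes.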
Figure 13-8 illustrates the possibilities for useful comparison by using metrics, based on available
program histories. By using models based on completed software developments, the initiation
and revision of your current plans and estimates will be based on “informed data.” As you
gather performance data on your program, you should compare your values with those for related
programs in historical databases. The comparisons in Figure 13-8 should be viewed collectively,
as one component of a feedback-and-control system that leads to revisions in your management
plan. To execute your revised plans, you must make improvements in your development process
which will produce adjusted measures for the next round of comparisons.
[Figure 13-8 depicts a feedback loop: current project metrics are compared both with
organizational memory (project histories and models) and with the current project plan
(expectations); together with current subjective data, these comparisons support an assessment
of program health, which yields a revised plan and project adjustment.]
Figure 13-8. Management Process with Metrics
13.4.9.5 Manage the Measurement Data and Indicators
The measurement data and the indicators must be properly managed according to the requirements
identified in the measurement plan.
13.4.9.6 Report the Indicators
Once the indicators are derived, they are made available to all those affected by the indicators or
by the decisions made because of the indicators.
13.4.9.7 Evaluation
The value of any measurement process is in direct proportion to the value the indicators have to
the users of the indicators.
13.4.9.8 Review Usability of Indicators
Initially, the selection of indicators, analysis methods, and specific measurement data may be a
‘best guess’ whether or not they meet the information needs specified. Over time, through a
review of the usefulness of the indicators, the selection can be refined such that there is a high
correlation between the indicators selected and the information needs. This will be an iterative
process.