This document outlines the IEEE standard for a software quality metrics methodology. The purpose of establishing software quality metrics is to quantify quality factors and to verify that quality requirements are met. The document describes the steps for developing a software quality metrics program: determining quality requirements, selecting metrics, collecting and storing data, validating metrics, and revalidating them when the environment changes. The goal is to use metrics to predict and measure software quality, identify issues early, and ensure compliance with quality requirements.
IEEE Standard for a Software Quality Metrics Methodology
Sponsor
Software Engineering Standards Committee of the IEEE
Computer Society
Approved 8 December 1998
IEEE-SA Standards Board
1.2 Determine the list of quality requirements:
Survey all involved parties
Create the list of quality requirements
1.3 Quantify each quality factor:
For each quality factor, assign one or more direct metrics to represent it
Assign direct metric values to serve as quantitative requirements for that quality factor
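The quantification step above can be sketched in code. This is a hypothetical illustration only: the factor names, metric names, and target values are invented examples, not taken from the standard.

```python
# Hypothetical quantified quality requirements: each quality factor maps to
# one or more direct metrics, each with a target value serving as the
# quantitative requirement. All names and numbers are invented examples.
quality_requirements = {
    "reliability": [
        # (direct metric, target value, unit)
        ("mean_time_to_failure", 1000.0, "hours"),
    ],
    "maintainability": [
        ("mean_time_to_repair", 8.0, "hours"),
    ],
}

def target_for(factor, metric):
    """Return the quantitative requirement set for a direct metric."""
    for name, target, _unit in quality_requirements[factor]:
        if name == metric:
            return target
    raise KeyError(f"no target for {metric} under {factor}")
```

A table like this becomes the reference point against which measured and predicted metric values are later compared.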
Identify tools
Describe data storage procedures
Establish a traceability matrix
Identify the organizational entities that participate in data collection and those responsible for monitoring it
Describe the training and experience required for data collection and the training process for the personnel involved
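A traceability matrix such as the one called for above might be represented as a simple mapping from each metric to the raw data items it is computed from. This is a minimal sketch; the metric and data-item names are invented examples.

```python
# Hypothetical traceability matrix: each metric traces to the raw data
# items needed to compute it (names are illustrative only).
traceability = {
    "defect_density": ["defect_count", "source_lines_of_code"],
    "mean_time_to_failure": ["failure_timestamps"],
}

def data_items_for(metrics):
    """Collect the set of data items needed to compute the given metrics."""
    needed = set()
    for metric in metrics:
        needed.update(traceability[metric])
    return needed
```

Such a matrix makes it possible to check, before collection begins, that every selected metric has a defined data source.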
Test the data collection and metrics computation procedures on selected software that will act as a prototype
Select samples that are similar to the project(s) on which the metrics will be used
Examine the cost of the measurement process for the prototype to verify or improve the cost analysis
Use the results collected from the prototype to improve the metric descriptions and the descriptions of data items
Using the formats in Table, collect and store data in the project metrics database at the appropriate time in the life cycle
Check the data for accuracy and proper unit of measure
Monitor the data collection
Check the data for uniformity if more than one person is collecting it
Compute the metric values from the collected data
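The accuracy check and metric computation above can be sketched as follows, assuming defect density (defects per thousand lines of code) is one of the chosen direct metrics; the metric definition and field names here are illustrative, not prescribed by the standard.

```python
# Minimal sketch: validate collected data, then compute a direct metric.
# Defect density as defects per KLOC is an assumed example metric.
def defect_density(defect_count, source_lines_of_code):
    """Defects per thousand lines of code (KLOC)."""
    # Accuracy check: reject data that is out of range before computing.
    if defect_count < 0 or source_lines_of_code <= 0:
        raise ValueError("collected data failed the accuracy check")
    return defect_count / (source_lines_of_code / 1000.0)
```

Performing the range check at computation time catches recording errors before bad values enter the project metrics database.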
Interpret and record the results
Analyze the differences between the collected metric data and the target values
Investigate significant differences
Unacceptable quality may be manifested as excessive complexity, inadequate documentation, lack of traceability, or other undesirable attributes
Use validated metrics during development to make predictions of direct metric values
Make predictions for software components and process steps
Analyze in detail the software components and process steps whose predicted direct metric values deviate from the target values
Use direct metrics to ensure compliance of software products with quality requirements during system and acceptance testing
Use direct metrics for software components and process steps
Compare these metric values with the target values of the direct metrics
Classify software components and process steps whose measurements deviate from the target values as noncompliant
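The compliance classification above can be sketched in code. This assumes a simple deviation rule in which a measured value above its target is noncompliant (appropriate for metrics like defect density where higher is worse); the rule and names are illustrative only.

```python
# Illustrative sketch: flag components whose measured direct metric value
# exceeds the target. Assumes "higher than target is worse", which holds
# for metrics such as defect density but not for all direct metrics.
def classify_noncompliant(measurements, targets, tolerance=0.0):
    """Return the components whose metric value exceeds target + tolerance."""
    noncompliant = []
    for component, value in measurements.items():
        if value > targets[component] + tolerance:
            noncompliant.append(component)
    return noncompliant
```

For metrics where lower values are worse (e.g., test coverage), the comparison direction would be reversed per metric.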
The purpose of metrics validation is to identify both product and process metrics that can predict specified quality factor values, which are quantitative representations of quality requirements
Metrics shall indicate whether quality requirements have been achieved or are likely to be achieved in the future
For the purpose of assessing whether a metric is valid, the following thresholds shall be designated:
V - square of the linear correlation coefficient
B - rank correlation coefficient
A - prediction error
α - confidence level
P - success rate
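Two of the validity statistics above can be computed with a short routine. This is a minimal sketch in pure Python: it computes the square of the linear (Pearson) correlation coefficient between metric values and quality factor values (criterion V), and a relative prediction error (criterion A); the exact formulas and thresholds are set by the validation procedure, so these definitions are assumptions for illustration.

```python
# Sketch of two validity statistics: the square of the linear correlation
# coefficient (criterion V) and a relative prediction error (criterion A).
def r_squared(xs, ys):
    """Square of the Pearson correlation coefficient between xs and ys."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return (cov * cov) / (var_x * var_y)

def prediction_error(predicted, actual):
    """Relative error of a predicted quality factor value."""
    return abs(predicted - actual) / abs(actual)
```

A metric whose r_squared against the quality factor falls below the designated threshold V would fail that validity test and be restricted accordingly.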
5.3.1 Identify the quality factors sample
A sample of quality factors shall be drawn from the metrics
database
5.3.2 Identify the metrics sample
A sample from the same domain (e.g., same software
components), as used in 5.3.1, shall be drawn from the metrics
database
5.3.3 Perform a statistical analysis
The analysis described in 5.2 shall be performed
Before a metric is used to evaluate the quality of a product or
process, it shall be validated against the criteria described in
5.2. If a metric does not pass all of the validity tests, it shall
only be used according to the criteria prescribed by those tests
5.3.4 Document the results
Documented results shall include the direct metric, predictive
metric, validation criteria, and numerical results, as a minimum
5.3.5 Revalidate the metrics
A validated metric may not necessarily be valid in other
environments or future applications. Therefore, a predictive
metric shall be revalidated before it is used for another
environment or application
5.3.6 Evaluate the stability of the environment
Metrics validation shall be undertaken in a stable development
environment (i.e., where the design language, implementation
language, or project development tools do not change over the
life of the project in which validation is performed)