1) The document discusses various metrics for measuring software productivity, quality, and testing at different stages of the software development process. It covers metrics for requirements, design, source code, testing, maintenance, and processes.
2) Key metrics discussed include defects per KLOC, function points, defect removal efficiency, and size-oriented vs function-oriented metrics. Metrics for object-oriented design, user interfaces, and project management are also summarized.
3) Collecting measures and developing metrics at each stage of development allows indicators to be obtained and improvements to be made to both the product and process. A variety of productivity, quality, and testing metrics can provide meaningful insights.
• How do we measure the progress of testing?
• When do we release the software?
• Why do we devote more time and resources to testing a
particular module?
• What is the reliability of software at the time of release?
• Who is responsible for the selection of a poor test suite?
• How many faults do we expect during testing?
• How much time and resources are required for software
testing?
• How do we know the effectiveness of a test suite?
• Sometimes testing techniques are confused with testing objectives.
• Testing techniques can be viewed as aids that help to ensure the
achievement of test objectives.
• To avoid such misunderstandings, a clear distinction
should be made between test-related measures
that provide an evaluation of the program under test,
based on the observed test outputs, and measures
that evaluate the thoroughness of the test set.
SOFTWARE QUALITY
Factors affecting quality:
• Correctness: the software works properly and correctly
(no errors)
• Maintainability: easy to maintain; mean time to change
(MTTC) is small
• Integrity: resist interference; good level of security
• Usability: easy to use
Software testing metrics help us measure and quantify
many aspects of testing, and may provide answers
to such important questions.
MEASURE, MEASUREMENT, AND METRICS
• A measure provides a quantitative indication of
the extent, amount, dimension, capacity, or size
of some attributes of a product or process.
• Measurement is the act of determining a
measure.
• A metric is a quantitative measure of the
degree to which a product or process possesses a
given attribute.
ILLUSTRATION
• A measure is the number of failures experienced during testing.
• Measurement is the way of recording such failures.
• A software metric may be an average number of failures experienced
per hour during testing.
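The measure/measurement/metric distinction above can be made concrete with a minimal sketch (the timestamps and names below are invented for illustration):

```python
# Minimal sketch: turning measures into a metric.
# Assumption: failure timestamps (in hours from the start of testing)
# are the recorded measurements; all values are illustrative.

failure_times = [0.5, 1.2, 2.8, 3.1, 5.9]  # measure: each recorded failure
total_test_hours = 6.0                      # measure: elapsed testing time

# metric: average number of failures experienced per hour of testing
failures_per_hour = len(failure_times) / total_test_hours
print(failures_per_hour)
```

Here each recorded timestamp is a measure, the act of logging it is measurement, and the derived rate is the metric.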
MEASURE
• A measure has been established when a single
data point has been collected (e.g., the number of
errors uncovered within a single software
component).
• We measure quality attributes of software such as
testability, complexity, reliability, maintainability,
efficiency, portability, enhanceability, usability,
etc.
• Similarly, we may also like to measure size, effort,
development time, and resources for software.
MEASUREMENT
It can be defined as :
• The process by which numbers or
symbols are assigned to attributes
of entities in the real world in such a
way as to describe them according
to clearly defined rules
METRICS
It can be defined as :
• The continuous application of measurement-based techniques to the
software development process and its products to supply meaningful
and timely management information, together with the use of those
techniques to improve that process and its products
• Ejiogu defines a set of attributes that effective
software metrics should encompass.
• Software metrics are related to measures that, in turn,
involve numbers for quantification.
• These numbers are used to produce a better product
and improve its related process.
METRICS (cont.)
• The metric should be simple and computable:
• It should be relatively easy to learn how to derive the metric,
• Its computation should not demand inordinate effort or time.
• Always yield results that are unambiguous.
• The mathematical computation of the metric should use measures
that do not lead to bizarre combinations of units.
• Should be based on the requirements model, the design model, or
the structure of the program itself
• Should not be dependent on the vagaries of programming language
syntax or semantics
• The metric should provide information that can lead to a higher-
quality end product
INDICATOR
• A software engineer collects measures and
develops metrics so that indicators will be
obtained.
• An indicator is a metric or combination of metrics
that provide insight into the software process, a
software project, or the product itself.
Key performance indicators (KPIs) are metrics that
are used to track performance and trigger
remedial actions when their values fall outside a
predetermined range.
But how do you know that metrics
are meaningful in the first place?
SOFTWARE ANALYTICS
It can be defined as :
• The systematic computational analysis of software engineering data or
statistics to provide managers and software engineers with meaningful
insights and empower their teams to make better decisions
• Analytics can help developers predict the number of defects to
expect, where to test for them, and how much time it will take to fix
them.
• This allows managers and developers to create incremental schedules
that use these predictions to determine expected completion times.
SOFTWARE ANALYTICS (cont.)
Buse and Zimmermann suggest that analytics can help developers
make decisions regarding:
• Targeted testing: To help focus regression testing and integration
testing resources
• Targeted refactoring: To help make strategic decisions on how to
avoid large technical debt costs
• Release planning: To help ensure that market needs, as well as
technical features in software products, are taken into account
• Understanding customers: To help developers get actionable
information on product use by customers in the field during product
engineering
• Judging stability: To help managers and developers monitor the state
of the evolving prototype and anticipate future maintenance needs
• Targeting inspection: To help teams determine the value of individual
inspection activities, their frequency, and their scope
PRODUCT METRICS
• These metrics provide information about the testing
status of a software product.
• The data for such metrics are also generated during
testing and may help us to know the quality of the
product.
• Some of the basic metrics are given as:
• Time interval between failures
• Number of failures experienced in a time interval
• Cumulative failures experienced up to a specified time
• Time of failure
• Estimated time for testing
• Actual testing time
PRODUCT METRICS (cont.)
• With the basic metrics above, we may derive some additional metrics,
as given below:
• % of time spent = (Actual time spent / Estimated testing time) × 100
• Average time interval between failures
• Maximum and minimum failures experienced in any time interval
• Average number of failures experienced in time intervals
• Time remaining to complete the testing
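The additional product metrics above can be sketched as a short computation (the failure times and effort figures are invented for illustration):

```python
# Illustrative sketch of the additional product metrics above.
# Assumption: failure times are recorded in hours from the start of
# testing; all values below are invented.

failure_times = [2.0, 5.0, 9.0, 15.0]    # time of each observed failure
actual_testing_time = 40.0               # hours spent so far
estimated_testing_time = 100.0           # hours estimated for testing

# % of time spent = (actual time spent / estimated testing time) x 100
percent_spent = actual_testing_time / estimated_testing_time * 100

# average time interval between successive failures
intervals = [b - a for a, b in zip(failure_times, failure_times[1:])]
avg_interval = sum(intervals) / len(intervals)

# time remaining to complete the testing
time_remaining = estimated_testing_time - actual_testing_time

print(percent_spent, avg_interval, time_remaining)
```

With these sample figures, 40% of the estimated testing time has been spent and 60 hours remain.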
METRICS FOR THE REQUIREMENTS MODEL
• These estimation metrics examine the requirements model with the
intent of predicting the “size” of the resultant system.
• Size is sometimes (but not always) an indicator of design complexity
and is almost always an indicator of increased coding, integration, and
testing effort.
• By measuring the characteristics of the requirements
model, it is possible to gain quantitative insight
into its specificity and completeness.
THE CHARACTERISTICS
Conventional Software.
• Specificity (lack of ambiguity),
completeness, correctness,
understandability, verifiability, internal and
external consistency, achievability,
concision, traceability, modifiability,
precision, and reusability.
• Electronically stored; executable or at least
interpretable; annotated by relative
importance, and stable, versioned,
organized, cross-referenced, and specified
at the right level of detail
Mobile Software.
• Number of static screen displays.
• Number of dynamic screen displays.
• Number of persistent data objects.
• Number of external systems interfaced.
• Number of static content objects.
• Number of dynamic content objects.
• Number of executable functions.
DESIGN METRICS FOR
CONVENTIONAL SOFTWARE
• Architectural design metrics focus on
characteristics of the program
architecture with an emphasis on the
architectural structure and the
effectiveness of modules or components
within the architecture.
• Software design complexity measures:
structural complexity, data complexity,
and system complexity.
DESIGN METRICS FOR
OBJECT-ORIENTED SOFTWARE
Nine distinct and measurable characteristics of an OO design:
• Size is defined by taking a static count of OO entities such as classes or operations, coupled with the depth
of an inheritance tree.
• Complexity is defined in terms of structural characteristics by examining how classes of an OO design are
interrelated to one another.
• Coupling is measured by counting physical connections between elements of the OO design.
• Sufficiency is “the degree to which an abstraction [class] possesses the features required of it . . .”.
• Completeness determines whether a class delivers the set of properties that fully reflect the needs of the
problem domain.
• Cohesion is determined by examining whether all operations work together to achieve a single, well-
defined purpose.
• Primitiveness is the degree to which an operation is atomic—that is, the operation cannot be constructed
out of a sequence of other operations contained within a class.
• Similarity determines the degree to which two or more classes are similar in terms of their structure,
function, behavior, or purpose.
• Volatility measures the likelihood that a change will occur.
DESIGN METRICS FOR
USER INTERFACE
Suggested Interface Metrics:
• Layout appropriateness: the relative position of entities within the interface
• Layout complexity: number of distinct regions defined for an interface
• Layout region complexity: average number of distinct links per region
• Recognition complexity: average number of distinct items the user must look at before making a navigation or data input decision
• Recognition time: average time (in seconds) that it takes a user to select the appropriate action for a given task
• Typing effort: average number of keystrokes required for a specific function
• Mouse pick effort: average number of mouse picks per function
• Selection complexity: average number of links that can be selected per page
• Content acquisition time: average number of words of text per Web page
• Memory load: average number of distinct data items that the user must remember to achieve a specific objective
DESIGN METRICS FOR
USER INTERFACE (CONT.)
Suggested Graphic Design Metrics:
• Word count: total number of words that appear on a page
• Body text percentage: percentage of words that are body versus display text (e.g., headers)
• Emphasized body text percentage: the portion of body text that is emphasized
• Text positioning count: changes in text position from flush left
• Text cluster count: text areas highlighted with color, bordered regions, rules, or lists
• Link count: total links on a page
• Page size: total bytes for the page as well as elements, graphics, and style sheets
• Graphic percentage: percentage of page bytes that are for graphics
• Graphics count: total graphics on a page (not including graphics specified in scripts, applets, and objects)
• Color count: total colors employed
• Font count: total fonts employed (i.e., face + size + bold + italic)
DESIGN METRICS FOR
USER INTERFACE (CONT.)
Suggested Content Metrics:
• Page wait: average time required for a page to download at different connection speeds
• Page complexity: average number of different types of media used on a page, not including text
• Graphic complexity: average number of graphics media per page
• Audio complexity: average number of audio media per page
• Video complexity: average number of video media per page
• Animation complexity: average number of animations per page
• Scanned image complexity: average number of scanned images per page
DESIGN METRICS FOR
USER INTERFACE (CONT.)
Suggested Navigation Metrics:
• Page-linking complexity: number of links per page
• Connectivity: total number of internal links, not including dynamically generated links
• Connectivity density: connectivity divided by page count
DESIGN METRICS FOR SOURCE CODE
The measures are:
• n1 = number of distinct operators that appear in a program
• n2 = number of distinct operands that appear in a program
• N1 = total number of operator occurrences
• N2 = total number of operand occurrences
The length N can be estimated as
• N = n1 log2 n1 + n2 log2 n2
The program volume may be defined as
• V = N log2 (n1 + n2)
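These source-code measures can be sketched directly from the definitions above (the operator and operand counts below are invented for illustration):

```python
import math

# Halstead's estimated length and program volume, computed from
# hand-counted operator/operand tallies. The counts are invented.

n1, n2 = 10, 16    # distinct operators, distinct operands
N1, N2 = 40, 55    # total operator and operand occurrences

# estimated program length: N = n1 log2 n1 + n2 log2 n2
N_est = n1 * math.log2(n1) + n2 * math.log2(n2)

# program volume: V = N log2(n1 + n2), using the observed length N1 + N2
N_obs = N1 + N2
V = N_obs * math.log2(n1 + n2)

print(round(N_est, 2), round(V, 2))
```

For these sample counts, the estimated length is about 97.2 and the volume about 446.5.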
METRICS FOR TESTING
Testing metrics fall into two broad categories:
(1) metrics that attempt to predict the likely number of tests required at various testing levels,
and
(2) metrics that focus on test coverage for a given component.
The following metrics consider aspects of encapsulation and inheritance:
(1) Lack of cohesion in methods (LCOM).
(2) Percent public and protected (PAP).
(3) Public access to data members (PAD).
(4) Number of root classes (NOR).
(5) Fan-in (FIN).
(6) Number of children (NOC)
(7) Depth of the inheritance tree (DIT).
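Two of the inheritance-related metrics above, DIT and NOC, can be sketched over a toy class hierarchy using Python introspection (the hierarchy is invented for illustration):

```python
# Sketch of two inheritance-related testing metrics over a toy hierarchy.

class Shape: pass
class Polygon(Shape): pass
class Triangle(Polygon): pass
class Circle(Shape): pass

def dit(cls):
    """Depth of the inheritance tree: inheritance edges from cls to the root."""
    # __mro__ includes cls itself and object, so subtract 2 to count edges
    return len(cls.__mro__) - 2

def noc(cls):
    """Number of children: immediate subclasses of cls."""
    return len(cls.__subclasses__())

print(dit(Triangle), noc(Shape))  # Triangle is 2 levels deep; Shape has 2 children
```

Deeper trees (higher DIT) and wider fan-out (higher NOC) generally mean more test cases are needed to exercise inherited behavior.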
METRICS FOR MAINTENANCE
• IEEE Std. 982.1-2005 suggests a software maturity index (SMI) that
provides an indication of the stability of a software product (based on
changes that occur for each release of the product).
• The following information is determined:
• MT = number of modules in the current release
• Fc = number of modules in the current release that have been changed
• Fa = number of modules in the current release that have been added
• Fd = number of modules from the preceding release that were deleted in the current release
• The software maturity index is computed in the following manner:
• SMI = [MT − (Fa + Fc + Fd)] / MT
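The SMI computation can be sketched for a hypothetical release (all module counts are invented):

```python
# Software maturity index for a hypothetical release. Counts are invented.

MT = 120   # modules in the current release
Fc = 8     # modules changed in the current release
Fa = 6     # modules added in the current release
Fd = 2     # modules deleted from the preceding release

# SMI = [MT - (Fa + Fc + Fd)] / MT
smi = (MT - (Fa + Fc + Fd)) / MT
print(round(smi, 3))
```

As SMI approaches 1.0, the product begins to stabilize: fewer modules change between releases.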
PROCESS METRICS
• Process metrics are intended to improve the software process so
that errors are uncovered in the most effective manner
• Project metrics enable a software project manager to
(1) Assess the status of an ongoing project,
(2) Track potential risks,
(3) Uncover problem areas before they go “critical,”
(4) Adjust workflow or tasks, and
(5) Evaluate the project team’s ability to control the quality of software work products
DETERMINANTS FOR SOFTWARE QUALITY
AND ORGANIZATIONAL EFFECTIVENESS
• Referring to the Figure, the process sits at the
center of a triangle connecting three factors that
have a profound influence on software quality and
organizational performance.
• The skill and motivation of people have been
shown to be the most influential factors in quality
and performance.
• The complexity of the product can have a
substantial impact on quality and team
performance.
• The technology (i.e., the software engineering
methods and tools) that populate the process also
has an impact.
SOFTWARE MEASUREMENT
1. Size-oriented software metrics:
derived by normalizing quality and/or
productivity measures by considering
the size of the software that has been
produced.
2. Function-oriented software metrics:
use a measure of the functionality
delivered by the application as a
normalization value. The computation
of a function-oriented metric is based
on characteristics of the software’s
information domain and complexity.
SIZE-ORIENTED SOFTWARE
METRICS
• Normalize the quality and productivity by
measuring the size of the software (LOC/KLOC), so
that we get:
• Errors per KLOC
• Defects per KLOC
• Rupiah (Rp) per LOC
• Documentation pages per KLOC
• Other metrics can be derived:
• Errors per person-month
• LOC per person-month
• Rupiah (Rp) per documentation page
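Size-oriented normalization can be sketched for a hypothetical project (all figures are invented):

```python
# Size-oriented metrics for a hypothetical project. Figures are invented.

loc = 12_500          # lines of code produced
errors = 134          # errors found before delivery
defects = 29          # defects found after delivery
effort_pm = 24        # person-months of effort

kloc = loc / 1000
errors_per_kloc = errors / kloc
defects_per_kloc = defects / kloc
loc_per_person_month = loc / effort_pm

print(round(errors_per_kloc, 2), round(defects_per_kloc, 2),
      round(loc_per_person_month, 1))
```

Normalizing by size lets projects of different scale be compared, though LOC-based comparisons across programming languages remain problematic.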
FUNCTION-ORIENTED
SOFTWARE METRICS
• Measures functionality indirectly.
• Emphasis on program functionality and utility.
• First proposed by Albrecht, using a calculation
method called the FUNCTION POINT (FP).
• FP is derived by using empirical relationships based on
something measurable from the information domain
and related to the complexity of the software.
FUNCTION POINTS
Information Domain:
• Number of user inputs: the number of user inputs required by the
application
• Number of user outputs: the sum of all outputs, both reports and
errors (to the printer/screen)
• Number of user inquiries: an online input that results in an online output
• Number of files
• Number of external interfaces: all interfaces that are machine-
readable to transfer information to other systems.
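The FP computation over these five information-domain counts can be sketched as follows. The block assumes the "average" complexity weights commonly tabulated for Albrecht's method (inputs 4, outputs 5, inquiries 4, files 10, interfaces 7) and the usual adjustment FP = count total × (0.65 + 0.01 × ΣFi), where the Fi are the fourteen value-adjustment ratings (0 to 5); all counts and ratings below are invented:

```python
# Sketch of Albrecht's function-point computation. The weights are the
# standard "average" complexity weights; counts and ratings are invented.

counts = {"inputs": 24, "outputs": 16, "inquiries": 22,
          "files": 4, "interfaces": 2}
weights = {"inputs": 4, "outputs": 5, "inquiries": 4,
           "files": 10, "interfaces": 7}

# unadjusted count total: sum of each domain count times its weight
count_total = sum(counts[k] * weights[k] for k in counts)

# fourteen value-adjustment factors, each rated 0 (no influence) to 5
F = [3] * 14

# FP = count total x (0.65 + 0.01 x sum(Fi))
fp = count_total * (0.65 + 0.01 * sum(F))
print(count_total, round(fp, 1))
```

Because FP is derived from the requirements model rather than code, it can be estimated before a single line is written.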
METRICS FOR SOFTWARE QUALITY
• A quality metric that provides benefit at both the project and process
level is defect removal efficiency (DRE).
• In essence, DRE is a measure of the filtering ability of quality
assurance and control actions as they are applied throughout all
process framework activities.
• When considered for a project as a whole, DRE is defined in the
following manner:
• DRE = E / (E + D)
where E is the number of errors found before delivery of the software to the end user and D is
the number of defects found after delivery.
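The DRE computation is a one-liner once E and D are known (the counts below are invented):

```python
# Defect removal efficiency for a hypothetical project. Counts are invented.

E = 134   # errors found before delivery to the end user
D = 29    # defects found after delivery

# DRE = E / (E + D); 1.0 would mean no defects escaped to the field
dre = E / (E + D)
print(round(dre, 3))
```

A DRE approaching 1.0 indicates that quality assurance and control activities are filtering out most errors before release.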
UNITING OF METRICS IN THE SOFTWARE PROCESS
Figure: the software engineering process, the software project, and the software product each feed data collection; the collected data yield measures, measures yield metrics, and metrics yield indicators.
REFERENCES
01. Dooley, J. F. (2017). Software Development, Design and Coding. Apress.
02. Chopra, R. (2018). Software Testing: Principles and Practices. Mercury Learning & Information.
03. Majchrzak, T. A. (2012). Improving Software Testing: Technical and Organizational Developments. Springer Science & Business Media.
04. Myers, G. J., Sandler, C., & Badgett, T. (2012). The Art of Software Testing. John Wiley & Sons.
05. Singh, Y. (2011). Software Testing. Cambridge University Press.