Software Metrics
• A software metric is a measure of software characteristics that are measurable or countable.
Software metrics are valuable for many reasons, including measuring software performance,
planning work items, measuring productivity, and many other uses.
• A metric is a measurement of the degree to which an attribute belongs to a system, product, or process.
• Within the software development process, there are many metrics that are all connected.
• Software metrics are a quantifiable or countable assessment of the attributes of a software product.
There are 4 functions related to software metrics:
1. Planning
2. Organizing
3. Controlling
4. Improving
Software Metrics
Characteristics of Software Metrics
1. Quantitative: Metrics must possess a quantitative nature. It means metrics can be
expressed in numerical values.
2. Understandable: Metric computation should be easily understood, and the method of
computing metrics should be clearly defined.
3. Applicability: Metrics should be applicable in the initial phases of the development of
the software.
4. Repeatable: When measured repeatedly, the metric values should be the same and
consistent.
5. Economical: The computation of metrics should be economical.
6. Language Independent: Metrics should not depend on any programming language.
Software Metrics
Types of Software Metrics
Product Metrics: Product metrics are used to evaluate the state of the product, tracking risks and uncovering prospective
problem areas. The ability of the team to control quality is evaluated. Examples include lines of code, cyclomatic
complexity, code coverage, defect density, and code maintainability index.
Process Metrics: Process metrics pay particular attention to enhancing the long-term process of the team or
organization. These metrics are used to optimize the development process and maintenance activities of software.
Examples include effort variance, schedule variance, defect injection rate, and lead time.
Project Metrics: Project metrics describe the characteristics and execution of a project. Examples include effort
estimation accuracy, schedule deviation, cost variance, and productivity. Project metrics usually measure:
• Number of software developers
• Staffing patterns over the life cycle of software
• Cost and schedule
• Productivity
Software Metrics
How are metrics used?
Measures are often collected by software engineers and used by software
managers.
Measurements are analysed and compared with past measurements for similar
projects to look for trends (good and bad) and to make better estimates.
Measures, Metrics, and Indicators
These three terms are often used interchangeably, but they can have subtle differences.
Measure
Provides a quantitative indication of the extent, amount, dimension, capacity, or size of some attribute of a product or
process
Example: Number of defects found in component testing. LOC of each component.
Measurement
The act of determining a measure
Example: Collecting the defect counts. Counting LOC.
Metric
(IEEE) A quantitative measure of the degree to which a system, component, or process possesses a given attribute
Example: defects found in component testing/LOC of code tested.
Indicator
A metric or combination of metrics that provides insight into the software process, a software project, or the product
itself. Indicators are used to manage the process or project.
Purpose of Product Metrics
Metrics Principles
Before we introduce a series of product metrics that
1. Assist in the evaluation of the analysis and design models,
2. Provide an indication of the complexity of procedural designs and source code, and
3. Facilitate the design of more effective testing,
it is important for you to understand basic measurement principles.
Activities of a Measurement Process
• Formulation
The derivation (i.e., identification) of software measures and metrics appropriate for
the representation of the software that is being considered
• Collection
The mechanism used to accumulate data required to derive the formulated metrics
• Analysis
The computation of metrics and the application of mathematical tools
• Interpretation
The evaluation of metrics in an effort to gain insight into the quality of the
representation
• Feedback
Recommendations derived from the interpretation of product metrics and passed on
to the software development team.
Reasons for measuring Software
processes, products, and resources
To characterize
To gain understanding of products, processes, and resources
To establish a baseline for future comparisons
To evaluate
To determine status with respect to the plan
To predict
So that we can plan and update estimates
To improve
So that we have more quantitative information to help determine root causes
Metrics guidelines
• Use common sense and organizational sensitivity when interpreting
metrics data.
• Provide regular feedback to the individuals and teams who collect
measures and metrics.
• Do not use metrics to appraise individuals.
• Set clear goals and metrics that will be used to achieve them.
• Never use metrics to threaten individuals and teams.
• Metrics that indicate a problem area should not be considered a
problem, but an indicator for process improvement.
• Do not obsess on a single metric.
Metrics for Analysis model
In software engineering, metrics for the analysis model help in evaluating the quality and complexity of the
analysis artifacts, such as requirements, use cases, and the conceptual model. These metrics ensure that the
analysis phase produces a robust foundation for subsequent development activities.
Technical work in software engineering begins with the creation of the analysis model. It is at this stage that
requirements are derived and that a foundation for design is established. Therefore, technical metrics that
provide insight into the quality of the analysis model are desirable. Although relatively few analysis and
specification metrics have appeared in the literature, it is possible to adapt metrics derived for project application
for use in this context. These metrics examine the analysis model with the intent of predicting the “size” of the
resultant system. It is likely that size and design complexity will be directly correlated.
Functionality delivered
Provides an indirect measure of the functionality that is packaged within the software
System size
Measures the overall size of the system defined in terms of information available as part of the analysis model
Specification quality
Provides an indication of the specificity and completeness of a requirements specification
Function points, information domain values
and value Adjustment Factors
Function Points (FPs)
Function points are a standard unit of measure that represents the
functional size of a software application. They are calculated based on the
functionality delivered to the user, as described in the requirements. The
calculation of function points involves identifying and weighting various
components of the Information Domain
In the context of the analysis model in software engineering, function
points, Information Domain Values (IDVs), and Value Adjustment Factors
(VAFs) are critical elements used to quantify the functionality and
complexity of software systems
Function points, information domain values
and value Adjustment Factors
Information Domain Values (IDVs)
Information Domain Values are the key elements that contribute to the functionality of the software. These elements
are categorized and weighted based on their complexity. The five key IDVs are:
1. External Inputs (EI):
• User data or control inputs that update internal logical files.
• Weighting: Simple (3 FP), Average (4 FP), Complex (6 FP).
2. External Outputs (EO):
• User data or control outputs derived from internal logical files.
• Weighting: Simple (4 FP), Average (5 FP), Complex (7 FP).
3. External Inquiries (EQ):
• User inquiries that retrieve data without modifying the internal logical files.
• Weighting: Simple (3 FP), Average (4 FP), Complex (6 FP).
4. Internal Logical Files (ILF):
• Logical groups of user data or control information maintained within the system.
• Weighting: Simple (7 FP), Average (10 FP), Complex (15 FP).
5. External Interface Files (EIF):
• Logical groups of user data or control information referenced by the system but maintained by another system.
• Weighting: Simple (5 FP), Average (7 FP), Complex (10 FP).
Function points, information domain values
and value Adjustment Factors
Steps to Calculate Function Points
• Identify and classify each component of the information domain.
• Assign weights to each component based on its complexity (simple, average,
complex).
• Calculate the Unadjusted Function Points (UFP) by summing up the weighted
values of all components.
Unadjusted Function Points (UFP)
The sum of all function points calculated from the information domain components
without considering environmental factors.
UFP = ∑ (count × weight)
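The UFP summation above is easy to mechanize. The sketch below restates the IDV weight table in Python; the function name and the count-dictionary layout are assumptions made for this illustration.

```python
# Illustrative sketch: UFP as the weighted sum of information-domain counts.
# The weight table restates the IDV weightings listed above.
WEIGHTS = {
    "EI":  {"simple": 3, "average": 4,  "complex": 6},
    "EO":  {"simple": 4, "average": 5,  "complex": 7},
    "EQ":  {"simple": 3, "average": 4,  "complex": 6},
    "ILF": {"simple": 7, "average": 10, "complex": 15},
    "EIF": {"simple": 5, "average": 7,  "complex": 10},
}

def unadjusted_function_points(counts):
    """counts: {category: {complexity: count}} -> UFP = sum(count x weight)."""
    return sum(
        counts.get(cat, {}).get(level, 0) * weight
        for cat, levels in WEIGHTS.items()
        for level, weight in levels.items()
    )

print(unadjusted_function_points({"EI": {"simple": 2}, "ILF": {"average": 1}}))  # 2*3 + 1*10 = 16
```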
Function points, information domain values
and value Adjustment Factors
Value Adjustment Factors (VAF)
Value Adjustment Factors adjust the UFP to account for various technical and environmental considerations that impact the
development and operation of the software. VAF is calculated using 14 General System Characteristics (GSCs), each rated on a
scale from 0 (no influence) to 5 (strong influence).
General System Characteristics (GSCs)
1) Data communications
2) Distributed data processing
3) Performance objectives
4) Heavily used configuration
5) Transaction rate
6) On-line data entry
7) End-user efficiency
8) On-line update
9) Complex processing
10)Reusability
11)Installation ease
12)Operational ease
13)Multiple sites
14)Facilitate change
Function points, information domain values
and value Adjustment Factors
Calculating VAF
1. Sum the ratings of all 14 GSCs to get the Total Degree of Influence (TDI).
TDI = ∑ (rating of each GSC)
2. Calculate VAF using the following formula:
VAF = 0.65 + (0.01 × TDI)
Adjusted Function Points (AFP)
The final adjusted function points are obtained by multiplying the UFP by the VAF.
AFP = UFP × VAF
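As a hedged sketch of the two formulas above (the function names and the input layout are assumptions for this example):

```python
# Illustrative sketch: VAF from the 14 GSC ratings (each 0-5), and AFP from UFP and VAF.
def value_adjustment_factor(gsc_ratings):
    """gsc_ratings: iterable of exactly 14 ratings, each between 0 and 5."""
    ratings = list(gsc_ratings)
    assert len(ratings) == 14 and all(0 <= r <= 5 for r in ratings)
    tdi = sum(ratings)            # Total Degree of Influence
    return 0.65 + 0.01 * tdi      # VAF ranges from 0.65 to 1.35

def adjusted_function_points(ufp, vaf):
    return ufp * vaf

print(round(value_adjustment_factor([3] * 14), 2))  # TDI = 42 -> VAF = 1.07
```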
Function points, information domain values
and value Adjustment Factors
Example Scenario
Consider a simple software system with the following components:
• 10 External Inputs (6 simple, 3 average, 1 complex)
• 7 External Outputs (4 simple, 2 average, 1 complex)
• 5 External Inquiries (3 simple, 1 average, 1 complex)
• 2 External Interface Files (1 simple, 1 average)
• 3 Internal Logical Files (1 simple, 1 average, 1 complex)
Function points, information domain values
and value Adjustment Factors
Step - 1: Calculate UFP
External Inputs: (6 × 3) + (3 × 4) + (1 × 6) = 18 + 12 + 6 = 36
External Outputs: (4 × 4) + (2 × 5) + (1 × 7) = 16 + 10 + 7 = 33
External Inquiries: (3 × 3) + (1 × 4) + (1 × 6) = 9 + 4 + 6 = 19
Internal Logical Files: (1 × 7) + (1 × 10) + (1 × 15) = 7 + 10 + 15 = 32
External Interface Files: (1 × 5) + (1 × 7) = 5 + 7 = 12
UFP = 36 + 33 + 19 + 32 + 12 = 132
Function points, information domain values
and value Adjustment Factors
Step - 2: Calculate VAF
Assume the GSCs have been rated and the TDI is calculated as 25.
VAF = 0.65 + (0.01 × 25) = 0.65 + 0.25 = 0.90
Step - 3: Calculate AFP
AFP = 132 × 0.90 = 118.8
The adjusted function points for this software system are 118.8, which provides
a quantitative measure of the software's functional size and complexity.
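The same arithmetic can be reproduced with a short, self-contained script (variable names and the data layout are arbitrary choices for this sketch):

```python
# Reproduce the worked example: UFP = 132, VAF = 0.90, AFP = 118.8.
weights = {                       # (simple, average, complex) weights per IDV category
    "EI": (3, 4, 6), "EO": (4, 5, 7), "EQ": (3, 4, 6),
    "ILF": (7, 10, 15), "EIF": (5, 7, 10),
}
counts = {                        # counts from the example scenario above
    "EI": (6, 3, 1), "EO": (4, 2, 1), "EQ": (3, 1, 1),
    "ILF": (1, 1, 1), "EIF": (1, 1, 0),
}
ufp = sum(c * w for cat in weights for c, w in zip(counts[cat], weights[cat]))
tdi = 25                          # assumed total degree of influence
vaf = 0.65 + 0.01 * tdi
afp = ufp * vaf
print(ufp, round(vaf, 2), round(afp, 1))   # 132 0.9 118.8
```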
Metrics for the
Design Model
Architectural Design Metrics
Architectural design metrics in software engineering are quantitative measures used to assess
the quality and effectiveness of the software architecture. These metrics help in evaluating
the architectural design's attributes such as modularity, maintainability, reusability,
performance, and scalability.
Metrics for Object-Oriented Design
In a detailed treatment of software metrics for OO systems, Whitmire describes nine distinct and
measurable characteristics of an OO design:
Size: Size is defined in terms of four views: population, volume, length, and functionality.
• Population is measured by taking a static count of OO entities such as classes or operations.
• Volume measures are identical to population measures but are collected dynamically—at a given
instant of time.
• Length is a measure of a chain of interconnected design elements (e.g., the depth of an inheritance
tree is a measure of length).
• Functionality metrics provide an indirect indication of the value delivered to the customer by an OO
application.
In reality, technical metrics for OO systems can be applied not only to the design model, but also to the
analysis model. In the sections that follow, we explore metrics that provide an indication of quality at the
OO class level and the operation level. In addition, metrics applicable to project management and testing
are also explored.
Metrics for Object-Oriented Design
Complexity: Like size, there are many differing views of software complexity . Whitmire views complexity in
terms of structural characteristics by examining how classes of an OO design are interrelated to one another.
Coupling: The physical connections between elements of the OO design (e.g., the number of collaborations
between classes or the number of messages passed between objects) represent coupling within an OO system.
Sufficiency: Whitmire defines sufficiency as “the degree to which an abstraction possesses the features required
of it, or the degree to which a design component possesses features in its abstraction, from the point of view of
the current application.” Stated another way, we ask: “What properties does this abstraction (class) need to
possess to be useful to me?” . In essence, a design component (e.g., a class) is sufficient if it fully reflects all
properties of the application domain object that it is modeling— that is, that the abstraction (class) possesses the
features required of it.
Completeness: The only difference between completeness and sufficiency is “the feature set against which we
compare the abstraction or design component.” Sufficiency compares the abstraction from the point of view of
the current application. Completeness considers multiple points of view, asking the question: “What properties
are required to fully represent the problem domain object?” Because the criterion for completeness considers
different points of view, it has an indirect implication about the degree to which the abstraction or design
component can be reused.
Metrics for Object-Oriented Design
Cohesion: Like its counterpart in conventional software, an OO component should be designed in a
manner that has all operations working together to achieve a single, well-defined purpose. The
cohesiveness of a class is determined by examining the degree to which “the set of properties it
possesses is part of the problem or design domain”.
Primitiveness: A characteristic that is similar to simplicity, primitiveness (applied to both operations
and classes) is the degree to which an operation is atomic—that is, the operation cannot be constructed
out of a sequence of other operations contained within a class. A class that exhibits a high degree of
primitiveness encapsulates only primitive operations.
Similarity: The degree to which two or more classes are similar in terms of their structure, function,
behavior, or purpose is indicated by this measure.
Volatility: As we have seen earlier in this book, design changes can occur when requirements are
modified or when modifications occur in other parts of an application, resulting in mandatory
adaptation of the design component in question. Volatility of an OO design component measures the
likelihood that a change will occur.
Class-Oriented Metrics
Chidamber and Kemerer (CK) Metrics Suite
WMC: weighted methods per class (Σ of method complexities; lower is better)
• Amount of effort required to implement and test a class
• Complexity of the inheritance tree
• Application specific, limiting potential reuse
DIT: depth of the inheritance tree (maximum length from a node to the root; lower is better)
• Lower-level classes inherit many methods.
• Difficulties when attempting to predict the behavior of a class.
• Greater design complexity.
• Positive: many methods may be reused.
NOC: number of children (subclasses that are immediately subordinate to a class)
• High: more reuse, but also more testing effort
Class-Oriented Metrics
CBO: coupling between object classes (number of collaborations on the class's CRC card; lower is better)
• High: reusability decreases; testing and modification become more complicated.
• This is consistent with the general guideline to reduce coupling.
RFC: response for a class (number of methods in the response set; lower is better)
• High: testing effort and overall design complexity will increase.
LCOM: lack of cohesion in methods
(the number of methods that access one or more of the same attributes)
• High: complexity of the class design increases; the class might be better designed by breaking it into two or
more separate classes.
• It is desirable to keep cohesion high; that is, keep LCOM low.
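The slide's LCOM description is informal; the original Chidamber and Kemerer formulation counts method pairs: P pairs that share no instance attributes minus Q pairs that share at least one, floored at zero. A minimal sketch of that pairwise counting follows (the method-to-attribute mapping is an assumed input, not extracted from source code):

```python
from itertools import combinations

def lcom_ck(method_attrs):
    """CK-style LCOM: max(P - Q, 0), where P = method pairs sharing no attributes
    and Q = method pairs sharing at least one attribute.
    method_attrs: {method_name: set of attribute names it uses}."""
    p = q = 0
    for (_, a1), (_, a2) in combinations(method_attrs.items(), 2):
        if a1 & a2:
            q += 1
        else:
            p += 1
    return max(p - q, 0)

# Two unrelated method clusters -> high LCOM suggests splitting the class.
print(lcom_ck({"read": {"path"}, "parse": {"path", "buf"},
               "draw": {"canvas"}, "resize": {"canvas"}}))   # 2
```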
MOOD Metrics Set
Harrison, Counsell, and Nithi propose a set of metrics for object-oriented design
that provide quantitative indicators for OO design characteristics. A sampling of
MOOD metrics follows:
• Method inheritance factor (MIF)
• Coupling factor (CF)
MOOD Metrics Set
Method inheritance factor (MIF)
The degree to which the class architecture of an OO system makes use of inheritance for both methods
(operations) and attributes is defined as
MIF = Σ Mi(Ci) / Σ Ma(Ci)
where the summation occurs over i = 1 to TC. TC is defined as the total number of classes in the
architecture, Ci is a class within the architecture, and
Ma(Ci) = Md(Ci) + Mi(Ci)
where
Ma(Ci) = the number of methods that can be invoked in association with Ci.
Md(Ci) = the number of methods declared in the class Ci.
Mi(Ci) = the number of methods inherited (and not overridden) in Ci.
The value of MIF (the attribute inheritance factor, AIF, is defined in an analogous manner) provides an
indication of the impact of inheritance on the OO software.
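A small Python sketch of the MIF ratio above (the per-class dictionary layout is an assumption for this illustration):

```python
# Illustrative MIF sketch: sum of inherited (not overridden) methods divided by
# the sum of all available methods, across all classes in the architecture.
def method_inheritance_factor(classes):
    """classes: {class_name: {"declared": Md(Ci), "inherited": Mi(Ci)}}."""
    inherited = sum(c["inherited"] for c in classes.values())                  # sum of Mi(Ci)
    available = sum(c["inherited"] + c["declared"] for c in classes.values())  # sum of Ma(Ci)
    return inherited / available if available else 0.0

print(method_inheritance_factor({
    "Base":    {"declared": 4, "inherited": 0},
    "Derived": {"declared": 2, "inherited": 4},
}))  # 4 / 10 = 0.4
```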
MOOD Metrics Set
Coupling factor (CF)
Earlier in this chapter we noted that coupling is an indication of the connections between elements of
the OO design. The MOOD metrics suite defines coupling in the following way:
CF = [Σi Σj is_client(Ci, Cj)] / (TC² − TC)
where the summations occur over i = 1 to TC and j = 1 to TC. The function is_client(Ci, Cj) = 1 if and only if
a relationship exists between the client class, Cc, and the server class, Cs, and Cc ≠ Cs;
is_client(Ci, Cj) = 0 otherwise.
Although many factors affect software complexity, understandability, and maintainability, it is
reasonable to conclude that, as the value for CF increases, the complexity of the OO software will also
increase and understandability, maintainability, and the potential for reuse may suffer as a result.
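The same idea for CF, as a hedged sketch (the class list, the collaboration set, and the predicate name are assumptions for this example):

```python
# Illustrative CF sketch: fraction of possible client/server class pairs
# (excluding self-coupling) that are actually coupled.
def coupling_factor(classes, is_client):
    """classes: list of class names; is_client(ci, cj) -> True if ci uses cj."""
    tc = len(classes)
    coupled = sum(1 for ci in classes for cj in classes
                  if ci != cj and is_client(ci, cj))
    return coupled / (tc * tc - tc) if tc > 1 else 0.0

uses = {("A", "B"), ("B", "C")}   # assumed collaboration pairs
print(coupling_factor(["A", "B", "C"], lambda i, j: (i, j) in uses))  # 2 / 6 = 0.333...
```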
Metrics For Source Code
Halstead proposed the first analytical "laws" for computer software. Software science assigns quantitative laws to the
development of computer software, using a set of primitive measures that may be derived after code is generated or estimated
once design is complete. These follow:
n1 = the number of distinct operators that appear in a program.
n2 = the number of distinct operands that appear in a program.
N1 = the total number of operator occurrences.
N2 = the total number of operand occurrences.
Halstead uses these primitive measures to develop expressions for the overall program length, potential minimum volume for
an algorithm, the actual volume (number of bits required to specify a program), the program level (a measure of software
complexity), the language level (a constant for a given language), and other features such as development effort,
development time, and even the projected number of faults in the software.
Halstead shows that length N can be estimated as: N = n1 log2 n1 + n2 log2 n2
and program volume may be defined as: V = N log2 (n1 + n2)
It should be noted that V will vary with programming language and represents the volume of information (in bits) required to
specify a program. Theoretically, a minimum volume must exist for a particular algorithm. Halstead defines a volume ratio L
as the ratio of volume of the most compact form of a program to the volume of the actual program. In actuality, L must
always be less than 1. In terms of primitive measures, the volume ratio may be expressed as L = (2/n1) × (n2/N2)
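The Halstead expressions above translate directly into code; here is a minimal sketch (the sample counts passed at the end are invented for illustration):

```python
import math

# Illustrative Halstead sketch using the primitive measures defined above.
def halstead(n1, n2, N1, N2):
    """n1/n2: distinct operators/operands; N1/N2: total operator/operand occurrences."""
    length_est = n1 * math.log2(n1) + n2 * math.log2(n2)   # estimated length N
    volume = (N1 + N2) * math.log2(n1 + n2)                # program volume V (bits)
    level = (2 / n1) * (n2 / N2)                           # volume ratio L
    return length_est, volume, level

print(halstead(n1=10, n2=20, N1=50, N2=60))
```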
Metrics For Testing
Testing effort can also be estimated using metrics derived from Halstead measures. Using the
definitions for program volume, V, and program level, PL, software science effort, e, can be computed
as
PL = 1/[(n1/2)•(N2/n2)] (1)
e = V/PL (2)
The percentage of overall testing effort to be allocated to a module k can be estimated using the
following relationship:
percentage of testing effort (k) = e(k) / Σ e(i) (3)
where e(k) is computed for module k using Equations (1) and (2) and the summation in the denominator
of Equation (3) is the sum of software science effort across all modules of the system.
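A sketch of Equations (1)-(3) applied to a few modules (all counts and volumes below are invented for illustration):

```python
# Halstead-based testing-effort allocation across modules.
def program_level(n1, n2, N2):
    return 1.0 / ((n1 / 2.0) * (N2 / n2))          # Equation (1)

def effort(volume, pl):
    return volume / pl                              # Equation (2)

def testing_effort_share(module_efforts):
    """Fraction of total testing effort per module (Equation 3)."""
    total = sum(module_efforts.values())
    return {m: e / total for m, e in module_efforts.items()}

# Assumed (n1, n2, N2, volume) per module.
modules = {"parser": (10, 20, 60, 540.0), "ui": (8, 15, 40, 300.0), "db": (6, 10, 25, 150.0)}
efforts = {name: effort(v, program_level(n1, n2, N2))
           for name, (n1, n2, N2, v) in modules.items()}
print(testing_effort_share(efforts))   # parser gets the largest share
```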
Metrics For Object Oriented Testing
The design metrics provide an indication of design quality. They also provide a general
indication of the amount of testing effort required to exercise an OO system.
Binder suggests a broad array of design metrics that have a direct influence on the
“testability” of an OO system. The metrics are organized into categories that reflect
important design characteristics.
Encapsulation
Lack of cohesion in methods (LCOM): The higher the value of LCOM, the more states
must be tested to ensure that methods do not generate side effects.
Percent public and protected (PAP): Public attributes are inherited from other classes and
therefore visible to those classes. Protected attributes are a specialization and private to a
specific subclass. This metric indicates the percentage of class attributes that are public or protected.
High values for PAP increase the likelihood of side effects among classes. Tests must be
designed to ensure that such side effects are uncovered.
Metrics For Object Oriented Testing
Public access to data members (PAD): This metric indicates the number of classes (or methods)
that can access another class’s attributes, a violation of encapsulation. High values for PAD lead to
the potential for side effects among classes. Tests must be designed to ensure that such side effects
are uncovered.
Inheritance
Number of root classes (NOR): This metric is a count of the distinct class hierarchies that are
described in the design model. Test suites for each root class and the corresponding class hierarchy
must be developed. As NOR increases, testing effort also increases.
Fan-in (FIN): When used in the OO context, fan-in is an indication of multiple inheritance. FIN > 1
indicates that a class inherits its attributes and operations from more than one root class. FIN > 1
should be avoided when possible.
Number of children (NOC) and depth of the inheritance tree (DIT): Superclass methods will
have to be retested for each subclass.
Metrics For Maintenance
IEEE Std. 982.1-1988 suggests a software maturity index (SMI) that provides an indication of the
stability of a software product (based on changes that occur for each release of the product). The
following information is determined:
MT = the number of modules in the current release
Fc = the number of modules in the current release that have been changed
Fa = the number of modules in the current release that have been added
Fd = the number of modules from the preceding release that were deleted in the current release
The software maturity index is computed in the following manner:
SMI = [MT − (Fa + Fc + Fd)] / MT
As SMI approaches 1.0, the product begins to stabilize. SMI may also be used as a metric for planning
software maintenance activities. The mean time to produce a release of a software product can be
correlated with SMI, and empirical models for maintenance effort can be developed.
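The SMI computation is a one-liner; the release figures below are invented for illustration:

```python
# SMI = [MT - (Fa + Fc + Fd)] / MT
def software_maturity_index(mt, fa, fc, fd):
    """mt: modules in current release; fa/fc/fd: modules added/changed/deleted."""
    return (mt - (fa + fc + fd)) / mt

print(software_maturity_index(mt=940, fa=40, fc=90, fd=12))   # ~0.85, i.e. stabilizing
```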
Software Measurement
Measurement enables us to gain insight into the process and
the project by providing a mechanism for objective
evaluation.
• Basic quality and productivity data are collected. These
data are then analyzed, compared against past averages,
and assessed to determine whether quality and
productivity improvements have occurred.
• Metrics are also used to pinpoint problem areas so that
remedies can be developed and the software process can
be improved.
Size Oriented Metrics
Object Oriented Metrics
Function Oriented Metrics
Use Case Oriented Metrics
Web Engineering Project Metrics
Size Oriented Metrics
Size-oriented metrics are derived by normalizing quality and/or productivity
measures by considering the size of the software that has been produced. The organization
builds a simple record of size measures for its software projects, drawing on past
experience. Size is a direct measure of software, and this is one
of the simplest and earliest metrics used to measure the size of computer programs.
Size-oriented metrics are also used for measuring and comparing the productivity of
programmers. The size measurement is based on counting lines
of code, where a line of code is defined as one line of text in a source file.
While counting lines of code, the simplest standard is:
• Don’t count blank lines
• Don’t count comments
• Count everything else
• The size-oriented measure is not a universally accepted method.
Size Oriented Metrics
A simple set of size measures that can be developed is given below:
Size = Kilo Lines of Code (KLOC)
Effort = person-months
Productivity = KLOC / person-month
Quality = Number of faults / KLOC
Cost = $ / KLOC
Documentation = Pages of documentation / KLOC
Example of Size-Oriented Metrics
For size-oriented metrics, a software organization maintains records in tabular form.
The typical table entries are: project name, LOC, effort, pages of documentation, errors,
defects, and the total number of people working on the project.
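A small sketch of one such record and the derived size-oriented measures (every number below is invented for illustration):

```python
# One project record and the size-oriented measures listed above.
loc, effort_pm, cost, doc_pages, errors = 12_100, 24.0, 168_000.0, 365, 134

kloc = loc / 1000.0
print("Productivity :", round(kloc / effort_pm, 2), "KLOC/person-month")
print("Quality      :", round(errors / kloc, 2), "errors/KLOC")
print("Cost         :", round(cost / kloc), "$/KLOC")
print("Documentation:", round(doc_pages / kloc, 1), "pages/KLOC")
```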
Size Oriented Metrics
Conclusion
In conclusion, size-oriented measures are a useful tool for software development because they are easy to
use, standardised, and suitable for estimation. They do, however, have some problems, such as being
dependent on the programming language and being difficult to apply for early-stage estimates.
Companies can make better choices and improve their software development processes if they know
these measures and how to use them.
Function Oriented Metrics
Function-oriented metrics were developed by Albrecht in 1979 at IBM (International
Business Machines). He suggested a measure known as function points, derived using an
empirical relationship based on countable measures of the software’s information (requirements)
domain and assessments of the complexity of the software.
Function-Oriented Metrics are also known as Function Point Model. This model generally focuses on
the functionality of the software application being delivered. These methods are actually independent of
the programming language that is being used in software applications and based on calculating the
Function Point (FP). A function point is a unit of measurement that measures the business functionality
provided by the business product.
To determine whether a particular entry is simple, average, or complex, a criterion is
needed and should be developed by the organization. With the help of observations or experiments, the
different weighting factors should be determined, as shown in the table. With the help of these
tables, the count table can be computed.
Function Oriented Metrics
The software complexity can be computed by answering the following questions:
• Does the system need reliable backup and recovery?
• Are data communications required?
• Are there distributed processing functions?
• Is the performance of the system critical?
• Will the system run in an existing, heavily utilized operational environment?
• Does the system require on-line data entry?
• Does the on-line data entry require the input transaction to be built over multiple screens or
operations?
• Are the master files updated on-line?
• Are the inputs, outputs, files, or inquiries complex?
• Is the internal processing complex?
• Is the code designed to be reusable?
• Are conversion and installation included in the design?
• Is the system designed for multiple installations in different organizations?
• Is the application designed to facilitate change and ease of use by the user?
Object Oriented Metrics
• Number of scenario scripts (NSS): The number of scenario scripts or use-cases is directly proportional
to the number of classes required to meet requirements; the number of states for each class; and the
number of methods, attributes, and collaborations. NSS is a strong indicator of program size.
• Number of key classes (NKC): A key class focuses directly on the business domain for the problem and
will have a lower probability of being implemented via reuse. For this reason, high values for NKC
indicate substantial development work. Lorenz and Kidd suggest that between 20 and 40 percent of all
classes in a typical OO system are key classes. The remainder support infrastructure (GUI,
communications, databases, etc.).
• Number of support classes: Support classes are required to implement the system but are not
immediately related to the problem domain. Examples might be user interface (UI) classes, database
access and manipulation classes, and computation classes. In addition, support classes can be developed
for each of the key classes. Support classes are defined iteratively throughout an evolutionary process.
The number of support classes is an indication of the amount of effort required to develop the software
and also an indication of the potential amount of reuse to be applied during system development
The following metrics provide additional insight for object-oriented projects:
Object Oriented Metrics
• Average number of support classes per key class. In general, key classes are
known early in the project. Support classes are defined throughout. If the average
number of support classes per key class were known for a given problem domain,
estimating (based on total number of classes) would be greatly simplified.
• Number of subsystems (NSUB). The number of subsystems provides insight into
resource allocation, scheduling (with particular emphasis on parallel development)
and overall integration effort.
Use Case Oriented Metrics
• Use cases are used widely as a method for describing customer-level or business domain
requirements that imply software features and functions.
• It would seem reasonable to use the use case as a normalization measure similar to LOC or
FP. Like FP, the use case is defined early in the software process, allowing it to be used for
estimation before significant modeling and construction activities are initiated.
• Use cases describe (indirectly, at least) user-visible functions and features that are basic
requirements for a system. The use case is independent of programming language. In
addition, the number of use cases is directly proportional to the size of the application in
LOC and to the number of test cases that will have to be designed to fully exercise the
application.
• Researchers have suggested use-case points (UCPs) as a mechanism for estimating project
effort and other characteristics. The UCP is a function of the number of actors and
transactions implied by the use-case models and is analogous to the FP in some ways.
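The text does not give a UCP formula; the sketch below follows the commonly cited Karner-style formulation, and the actor and use-case weights and the two adjustment factors are assumptions, not taken from this material:

```python
# Hedged Karner-style sketch: UCP = (UAW + UUCW) x TCF x ECF.
ACTOR_WEIGHT = {"simple": 1, "average": 2, "complex": 3}
USECASE_WEIGHT = {"simple": 5, "average": 10, "complex": 15}

def use_case_points(actors, use_cases, tcf=1.0, ecf=1.0):
    """actors/use_cases: {complexity: count}; tcf/ecf: technical and
    environmental adjustment factors (left at 1.0 here for simplicity)."""
    uaw = sum(ACTOR_WEIGHT[k] * v for k, v in actors.items())
    uucw = sum(USECASE_WEIGHT[k] * v for k, v in use_cases.items())
    return (uaw + uucw) * tcf * ecf

print(use_case_points({"simple": 2, "average": 1}, {"average": 6, "complex": 2}))  # (4 + 90) = 94.0
```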
Web Engineering Project Metrics
The objective of all WebApp projects is to deliver a combination of content and functionality to the
end user. Measures and metrics used for traditional software engineering projects are difficult to
translate directly to WebApps. Yet, it is possible to develop a database that
allows assessment of internal productivity and quality measures derived over a number of projects.
Among the measures that can be collected are the following:
• Number of static Web pages. These pages represent low relative complexity and generally
require less effort to construct than dynamic pages. This measure provides an indication of the
overall size of the application and the effort required to develop it.
• Number of dynamic Web pages. These pages represent higher relative complexity and require
more effort to construct than static pages. This measure provides an indication of the overall size
of the application and the effort required to develop it.
• Number of internal page links. This measure provides an indication of the degree of
architectural coupling within the WebApp. As the number of page links increases, the effort
expended on navigational design and construction also increases.
Web Engineering Project Metrics
• Number of persistent data objects. As the number of persistent data objects (e.g., a
database or data file) grows, the complexity of the WebApp also grows and the effort to
implement it increases proportionally.
• Number of external systems interfaced. As the requirement for interfacing grows,
system complexity and development effort also increase.
• Number of static content objects. These objects represent low relative complexity and
generally require less effort to construct than dynamic pages.
• Number of dynamic content objects. These objects represent higher relative
complexity and require more effort to construct than static pages.
• Number of executable functions. As the number of executable functions (e.g., a script
or applet) increases, modeling and construction effort also increase.
Metrics for Software Quality
• Must use technical measures to evaluate quality in objective, rather than subjective ways.
• Must evaluate quality as the project progresses.
• The primary thrust is to measure errors and defects → metrics provide an indication of the effectiveness of
software quality assurance and control activities.
Measuring Quality
Correctness: defects per KLOC
Maintainability: the ease with which a program can be corrected, adapted, and enhanced; assessed via time or cost.
• Time-oriented metrics: Mean-time-to-change (MTTC)
• Cost-oriented metrics: Spoilage – cost to correct defects encountered
Integrity: ability to withstand attacks
• Threat: the probability that an attack of a specific type will occur within a given time.
• Security: the probability that an attack of a specific type will be repelled.
Integrity = Σ [1 − threat × (1 − security)], summed over each attack type (a small numeric sketch follows this list).
Usability: attempt to quantify “user-friendliness” in terms of four characteristics:
1) The physical/intellectual skill to learn the system
2) The time required to become moderately efficient in the use of the system
3) The net increase in productivity
4) A subjective assessment of user attitude toward the system(e.g., use of questionnaire).
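Here is the promised numeric sketch of the integrity measure, one term per attack type; the threat and security probabilities are assumed values:

```python
# integrity_k = 1 - threat_k * (1 - security_k), per attack type.
attacks = {
    "sql_injection": {"threat": 0.25, "security": 0.95},
    "denial_of_service": {"threat": 0.10, "security": 0.80},
}
for name, a in attacks.items():
    print(name, round(1 - a["threat"] * (1 - a["security"]), 4))  # 0.9875 and 0.98
```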
Defect Removal Efficiency
Defect Removal Efficiency (DRE) measures how effective defect identification and resolution activities are
before the software reaches production. It quantifies the thoroughness of the testing process and provides
a single indicator of the quality of the delivered software product.
The formula for computing the defect removal efficiency is:
DRE (%) = [Total Defects Found in Testing / (Total Defects Found in Testing + Total Defects Found in Production)] x 100
Where:
Total Defects Found in Testing: The number of defects discovered during the testing phase of software
development.
Total Defects Found in Production: The number of defects reported by users or detected after the software
has been released to the production environment.
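The DRE formula above maps directly to code (the function and variable names are choices made for this sketch):

```python
# DRE as a percentage of all defects that were caught before release.
def defect_removal_efficiency(found_in_testing, found_in_production):
    total = found_in_testing + found_in_production
    return 100.0 * found_in_testing / total if total else 100.0

print(defect_removal_efficiency(90, 10))   # 90.0 -> 90% of defects removed before production
```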

Comprehensive Analysis of Metrics in Software Engineering for Enhanced Project Management and Quality Assessment

  • 1.
    Software Metrics  Asoftware metric is a measure of software characteristics which are measurable or countable. Software metrics are valuable for many reasons, including measuring software performance, planning work items, measuring productivity, and many other uses.  A metric is a measurement of the level at which any impute belongs to a system product or process.  Within the software development process, many metrics are that are all connected.  Software metrics are a quantifiable or countable assessment of the attributes of a software product. There are 4 functions related to software metrics: 1. Planning 2. Organizing 3. Controlling 4. Improving
  • 2.
    Software Metrics Characteristics ofsoftware Metrics 1. Quantitative: Metrics must possess a quantitative nature. It means metrics can be expressed in numerical values. 2. Understandable: Metric computation should be easily understood, and the method of computing metrics should be clearly defined. 3. Applicability: Metrics should be applicable in the initial phases of the development of the software. 4. Repeatable: When measured repeatedly, the metric values should be the same and consistent. 5. Economical: The computation of metrics should be economical. 6. Language Independent: Metrics should not depend on any programming language.
  • 3.
    Software Metrics Types ofSoftware Metrics
  • 4.
    Software Metrics Types ofSoftware Metrics Product Metrics: Product metrics are used to evaluate the state of the product, tracing risks and undercover prospective problem areas. The ability of the team to control quality is evaluated. Examples include lines of code, cyclomatic complexity, code coverage, defect density, and code maintainability index. Process Metrics: Process metrics pay particular attention to enhancing the long-term process of the team or organization. These metrics are used to optimize the development process and maintenance activities of software. Examples include effort variance, schedule variance, defect injection rate, and lead time. Project Metrics: The project metrics describes the characteristic and execution of a project. Examples include effort estimation accuracy, schedule deviation, cost variance, and productivity. Usually measures- • Number of software developer • Staffing patterns over the life cycle of software • Cost and schedule • Productivity
  • 5.
    Software Metrics How aremetrics used ? Measures are often collected by software engineers and used by software managers. Measurements are analysed and compared to past measurements, for similar projects, to look for trends (good and bad) and to make better estimates.
  • 6.
    Measures, Metrics, andIndicators These three terms are often used interchangeably, but they can have subtle differences. Measure Provides a quantitative indication of the extent, amount, dimension, capacity, or size of some attribute of a product or process Example: Number of defects found in component testing. LOC of each component. Measurement The act of determining a measure Example: Collecting the defect counts. Counting LOC. Metric (IEEE) A quantitative measure of the degree to which a system, component, or process possesses a given attribute Example: defects found in component testing/LOC of code tested. Indicator A metric or combination of metrics that provides insight into the software process, a software project, or the product itself. Indicators are used to manage the process or project.
  • 7.
    Purpose of ProductMetrics Metrics Principles Before we introduce a series of product metrics that 1. Assist in the evaluation of the analysis and design models, 2. 2. Provide an indication of the complexity of procedural designs and source code, and 3. 3. Facilitate the design of more effective testing, it is important for you to understand basic measurement principles
  • 8.
    Activities of aMeasurement Process  Formulation The derivation (i.e., identification) of software measures and metrics appropriate for the representation of the software that is being considered  Collection he mechanism used to accumulate data required to derive the formulated metrics  Analysis The computation of metrics and the application of mathematical tools  Interpretation The evaluation of metrics in an effort to gain insight into the quality of the representation  Feedback Recommendations derived from the interpretation of product metrics and passed on to the software development team.
  • 9.
    Reasons for measuringSoftware processes, products, and resources To characterize To gain understanding of Product, Process, and To establish baseline for future comparisons To evaluate To determine status within the plan To predicate So that we can plan. Update estimates To improve We would have more information “quantitative” to help determine root causes
  • 10.
    Metrics guidelines • Usecommon sense and organizational sensitivity when interpreting metrics data. • Provide regular feedback to the individuals and teams who collect measures and metrics. • Do not use metrics to appraise individuals. • Set clear goals and metrics that will be used to achieve them. • Never use metrics to threaten individuals and teams. • Metrics that indicate a problem area should not be considered a problem, but an indicator for process improvement. • Do not obsess on a single metric.
  • 11.
    Metrics for Analysismodel In software engineering, metrics for the analysis model help in evaluating the quality and complexity of the analysis artifacts, such as requirements, use cases, and the conceptual model. These metrics ensure that the analysis phase produces a robust foundation for subsequent development activities. Technical work in software engineering begins with the creation of the analysis model. It is at this stage that requirements are derived and that a foundation for design is established. Therefore, technical metrics that provide insight into the quality of the analysis model are desirable. Although relatively few analysis and specification metrics have appeared in the literature, it is possible to adapt metrics derived for project application for use in this context. These metrics examine the analysis model with the intent of predicting the “size” of the resultant system. It is likely that size and design complexity will be directly correlated. Functionality delivered Provides an indirect measure of the functionality that is packaged within the software System size Measures the overall size of the system defined in terms of information available as part of the analysis model Specification quality Provides an indication of the specificity and completeness of a requirements specification
  • 12.
    Function points, informationdomain values and value Adjustment Factors Function Points (FPs) Function points are a standard unit of measure that represents the functional size of a software application. They are calculated based on the functionality delivered to the user, as described in the requirements. The calculation of function points involves identifying and weighting various components of the Information Domain In the context of the analysis model in software engineering, function points, Information Domain Values (IDVs), and Value Adjustment Factors (VAFs) are critical elements used to quantify the functionality and complexity of software systems
  • 13.
    Function points, informationdomain values and value Adjustment Factors Information Domain Values (IDVs) Information Domain Values are the key elements that contribute to the functionality of the software. These elements are categorized and weighted based on their complexity. The five key IDVs are: 1. External Inputs (EI): • User data or control inputs that update internal logical files. • Weighting: Simple (3 FP), Average (4 FP), Complex (6 FP). 2. External Outputs (EO): • User data or control outputs derived from internal logical files. • Weighting: Simple (4 FP), Average (5 FP), Complex (7 FP). 3. External Inquiries (EQ): • User inquiries that retrieve data without modifying the internal logical files. • Weighting: Simple (3 FP), Average (4 FP), Complex (6 FP). 4. Internal Logical Files (ILF): • Logical groups of user data or control information maintained within the system. • Weighting: Simple (7 FP), Average (10 FP), Complex (15 FP). 5. External Interface Files (EIF): • Logical groups of user data or control information referenced by the system but maintained by another system. • Weighting: Simple (5 FP), Average (7 FP), Complex (10 FP).
  • 14.
    Function points, informationdomain values and value Adjustment Factors Steps to Calculate Function Points • Identify and classify each component of the information domain. • Assign weights to each component based on its complexity (simple, average, complex). • Calculate the Unadjusted Function Points (UFP) by summing up the weighted values of all components. Unadjusted Function Points (UFP) The sum of all function points calculated from the information domain components without considering environmental factors. UFP = ∑ (count × weight)
  • 15.
    Function points, informationdomain values and value Adjustment Factors Value Adjustment Factors (VAF) Value Adjustment Factors adjust the UFP to account for various technical and environmental considerations that impact the development and operation of the software. VAF is calculated using 14 General System Characteristics (GSCs), each rated on a scale from 0 (no influence) to 5 (strong influence). General System Characteristics (GSCs) 1) Data communications 2) Distributed data processing 3) Performance objectives 4) Heavily used configuration 5) Transaction rate 6) On-line data entry 7) End-user efficiency 8) On-line update 9) Complex processing 10)Reusability 11)Installation ease 12)Operational ease 13)Multiple sites 14)Facilitate change
  • 16.
    Function points, informationdomain values and value Adjustment Factors Calculating VAF 1. Sum the ratings of all 14 GSCs to get the Total Degree of Influence (TDI). TDI = ∑ (rating of each GSC) 2. Calculate VAF using the following formula: VAF = 0.65 + (0.01 × TDI) Adjusted Function Points (AFP) The final adjusted function points are obtained by multiplying the UFP by the VAF. AFP = UFP × VAF
  • 17.
    Function points, informationdomain values and value Adjustment Factors Example Scenario Consider a simple software system with the following components: • 10 External Inputs (6 simple, 3 average, 1 complex) • 7 External Outputs (4 simple, 2 average, 1 complex) • 5 External Inquiries (3 simple, 1 average, 1 complex) • 2 External Interface Files (1 simple, 1 average) • 3 Internal Logical Files (1 simple, 1 average, 1 complex)
  • 18.
    Function points, informationdomain values and value Adjustment Factors Step - 1: Calculate UFP External Inputs: (6 × 3) + (3 × 4) + (1 × 6) = 18 + 12 + 6 = 36 External Outputs: (4 × 4) + (2 × 5) + (1 × 7) = 16 + 10 + 7 = 33 External Inquiries: (3 × 3) + (1 × 4) + (1 × 6) = 9 + 4 + 6 = 19 Internal Logical Files: (1 × 7) + (1 × 10) + (1 × 15) = 7 + 10 + 15 = 32 External Interface Files: (1 × 5) + (1 × 7) = 5 + 7 = 12 UFP = 36 + 33 + 19 + 32 + 12 = 132
  • 19.
    Function points, informationdomain values and value Adjustment Factors Step - 2: Calculate VAF Assume the GSCs have been rated and the TDI is calculated as 25. VAF = 0.65 + (0.01 × 25) = 0.65 + 0.25 = 0.90 Step - 3: Calculate AFP AFP = 132 × 0.90 = 118.8 The adjusted function points for this software system are 118.8, which provides a quantitative measure of the software's functional size and complexity.
  • 20.
  • 21.
    Architectural Design Metrics Architecturaldesign metrics in software engineering are quantitative measures used to assess the quality and effectiveness of the software architecture. These metrics help in evaluating the architectural design's attributes such as modularity, maintainability, reusability, performance, and scalability.
  • 22.
  • 23.
    Metrics for Object-OrientedDesign In a detailed treatment of software metrics for OO systems, Whitmire describes nine distinct and measurable characteristics of an OO design: Size: Size is defined in terms of four views: population, volume, length, and functionality. • Population is measured by taking a static count of OO entities such as classes or operations. • Volume measures are identical to population measures but are collected dynamically—at a given instant of time. • Length is a measure of a chain of interconnected design elements (e.g., the depth of an inheritance tree is a measure of length). • Functionality metrics provide an indirect indication of the value delivered to the customer by an OO application. In reality, technical metrics for OO systems can be applied not only to the design model, but also the analysis model. In the sections that follow, we explore metrics that provide an indication of quality at the OO class level and the operation level. In addition, metrics applicable for project management and testing are also explored
  • 24.
    Metrics for Object-OrientedDesign Complexity: Like size, there are many differing views of software complexity . Whitmire views complexity in terms of structural characteristics by examining how classes of an OO design are interrelated to one another. Coupling: The physical connections between elements of the OO design (e.g., the number of collaborations between classes or the number of messages passed between objects) represent coupling within an OO system. Sufficiency: Whitmire defines sufficiency as “the degree to which an abstraction possesses the features required of it, or the degree to which a design component possesses features in its abstraction, from the point of view of the current application.” Stated another way, we ask: “What properties does this abstraction (class) need to possess to be useful to me?” . In essence, a design component (e.g., a class) is sufficient if it fully reflects all properties of the application domain object that it is modeling— that is, that the abstraction (class) possesses the features required of it. Completeness: The only difference between completeness and sufficiency is “the feature set against which we compare the abstraction or design component.” Sufficiency compares the abstraction from the point of view of the current application. Completeness considers multiple points of view, asking the question: “What properties are required to fully represent the problem domain object?” Because the criterion for completeness considers different points of view, it has an indirect implication about the degree to which the abstraction or design component can be reused.
  • 25.
    Metrics for Object-OrientedDesign Cohesion: Like its counterpart in conventional software, an OO component should be designed in a manner that has all operations working together to achieve a single, well-defined purpose. The cohesiveness of a class is determined by examining the degree to which “the set of properties it possesses is part of the problem or design domain”. Primitiveness: A characteristic that is similar to simplicity, primitiveness (applied to both operations and classes) is the degree to which an operation is atomic—that is, the operation cannot be constructed out of a sequence of other operations contained within a class. A class that exhibits a high degree of primitiveness encapsulates only primitive operations. Similarity: The degree to which two or more classes are similar in terms of their structure, function, behavior, or purpose is indicated by this measure. Volatility: As we have seen earlier in this book, design changes can occur when requirements are modified or when modifications occur in other parts of an application, resulting in mandatory adaptation of the design component in question. Volatility of an OO design component measures the likelihood that a change will occur.
  • 26.
    Class-Oriented Metrics Chidamber andKemerer (CK) Metrics Suite WMC: weighted methods per class (Σ complexity of methods)(low) • Amount of effort required to implement and test a class • Complexity of inheritance tree. • Application specific and limiting potential reuse DIT: depth of the inheritance tree (Max length node to root) (low) • Lower level classes inherit many methods. „ • Difficulties when attempting to predict the behavior of a class. • Greater design complexity. • Positive: many methods may be reused NOC: number of children: ( subclasses that are immediately subordinate to a class) • High: reuse, the amount of test
  • 27.
    Class-Oriented Metrics CBO: couplingbetween object classes (No. of coll. in CRC) (low) • High: reusability will decrease, complicated testing and modifications • This is consistent with the general guideline to reduce coupling. to reduce coupling. RFC: response for a class (No. of methods in response set) (low) „ • High: Testing and overall design complexity will increase. LCOM: lack of cohesion in methods LCOM : No of methods that access one or more of the same attributes • High: Complexity of class design, class might be better designed by breaking it into two or more separate classes. • It is desirable to keep cohesion high; that is keep LCOM low. It is desirable to keep cohesion high; that is keep LCOM low.
  • 28.
    MOOD Metrics Set Harrison,Counsell, and Nithi propose a set of metrics for object-oriented design that provide quantitative indicators for OO design characteristics. A sampling of MOOD metrics follows:  Method inheritance factor (MIF).  Coupling factor (CF)
  • 29.
    MOOD Metrics Set Methodinheritance factor (MIF) The degree to which the class architecture of an OO system makes use of inheritance for both methods (operations) and attributes is defined as MIF = Mi(Ci)/ Ma(Ci) where the summation occurs over i = 1 to TC. TC is defined as the total number of classes in the architecture, Ci is a class within the architecture, and Ma(Ci) = Md(Ci) + Mi(Ci) where Ma(Ci) = the number of methods that can be invoked in association with Ci. Md(Ci) ) = the number of methods declared in the class Ci. Mi(Ci) = the number of methods inherited (and not overridden) in Ci. The value of MIF (the attribute inheritance factor, AIF, is defined in an analogous manner) provides an indication of the impact of inheritance on the OO software.
  • 30.
    MOOD Metrics Set Couplingfactor (CF) Earlier in this chapter we noted that coupling is an indication of the connections between elements of the OO design. The MOOD metrics suite defines coupling in the following way: CF = Σi Σj is_client (Ci, Cj)]/(TC2 -TC) where the summations occur over i = 1 to TC and j = 1 to TC. The function is_client = 1, if and only if a relationship exists between the client class, Cc, and the server class, Cs, and Cc ≠ Cs = 0, otherwise Although many factors affect software complexity, understandability, and maintainability, it is reasonable to conclude that, as the value for CF increases, the complexity of the OO software will also increase and understandability, maintainability, and the potential for reuse may suffer as a result.
  • 31.
    Metrics For SourceCode Halstead proposed the first analytical "laws" for computer software. Software science assigns quantitative laws to the development of computer software, using a set of primitive measures that may be derived after code is generated or estimated once design is complete. These follow: n1 = the number of distinct operators that appear in a program. n2 = the number of distinct operands that appear in a program. N1 = the total number of operator occurrences. N2 = the total number of operand occurrences. Halstead uses these primitive measures to develop expressions for the overall program length, potential minimum volume for an algorithm, the actual volume (number of bits required to specify a program), the program level (a measure of software complexity), the language level (a constant for a given language), and other features such as development effort, development time, and even the projected number of faults in the software. Halstead shows that length N can be estimated : N = n1 log 2 n1 + n2 log 2 n2 and program volume may be defined : V = N log 2 (n1 + n2) It should be noted that V will vary with programming language and represents the volume of information (in bits) required to specify a program. Theoretically, a minimum volume must exist for a particular algorithm. Halstead defines a volume ratio L as the ratio of volume of the most compact form of a program to the volume of the actual program. In actuality, L must always be less than 1. In terms of primitive measures, the volume ratio may be expressed as L = 2/n1 x n2/N2
  • 32.
    Metrics For Testing Testingeffort can also be estimated using metrics derived from Halstead measures. Using the definitions for program volume, V, and program level, PL, software science effort, e, can be computed as PL = 1/[(n1/2)•(N2/n2)] (1) e = V/PL (2) The percentage of overall testing effort to be allocated to a module k can be estimated using the following relationship: percentage of testing effort (k) = e(k)/ e(i) (3) where e(k) is computed for module k using Equations (1) and (2) and the summation in the denominator of Equation (4) is the sum of software science effort across all modules of the system.
Metrics for Object-Oriented Testing
The design metrics provide an indication of design quality. They also provide a general indication of the amount of testing effort required to exercise an OO system. Binder suggests a broad array of design metrics that have a direct influence on the "testability" of an OO system. The metrics are organized into categories that reflect important design characteristics.
Encapsulation
• Lack of cohesion in methods (LCOM): The higher the value of LCOM, the more states must be tested to ensure that methods do not generate side effects.
• Percent public and protected (PAP): Public attributes are inherited from other classes and therefore visible to those classes. Protected attributes are a specialization and private to a specific subclass. This metric indicates the percentage of class attributes that are public. High values for PAP increase the likelihood of side effects among classes. Tests must be designed to ensure that such side effects are uncovered.
Metrics for Object-Oriented Testing
• Public access to data members (PAD): This metric indicates the number of classes (or methods) that can access another class's attributes, a violation of encapsulation. High values for PAD lead to the potential for side effects among classes. Tests must be designed to ensure that such side effects are uncovered.
Inheritance
• Number of root classes (NOR): This metric is a count of the distinct class hierarchies that are described in the design model. Test suites must be developed for each root class and the corresponding class hierarchy. As NOR increases, testing effort also increases.
• Fan-in (FIN): When used in the OO context, fan-in is an indication of multiple inheritance. FIN > 1 indicates that a class inherits its attributes and operations from more than one root class. FIN > 1 should be avoided when possible.
• Number of children (NOC) and depth of the inheritance tree (DIT): Superclass methods will have to be retested for each subclass (see the sketch below for how these counts can be derived).
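To make the inheritance-related counts concrete, here is a small Python sketch (the class hierarchy is hypothetical) that derives NOR, FIN, NOC, and DIT from a child-to-parents map:

    # Hypothetical inheritance structure: each class maps to its list of parent classes.
    parents = {
        "Account": [],
        "Savings": ["Account"],
        "Checking": ["Account"],
        "PremiumChecking": ["Checking"],
        "AuditLog": [],
    }

    NOR = sum(1 for c, p in parents.items() if not p)             # distinct root classes
    FIN = {c: len(p) for c, p in parents.items()}                 # FIN > 1 signals multiple inheritance
    NOC = {c: sum(1 for p in parents.values() if c in p) for c in parents}

    def dit(c):
        # Depth of inheritance tree: longest path from the class to a root.
        return 0 if not parents[c] else 1 + max(dit(p) for p in parents[c])

    print(NOR, NOC["Account"], dit("PremiumChecking"))   # 2, 2, 2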
Metrics for Maintenance
IEEE Std. 982.1-1988 suggests a software maturity index (SMI) that provides an indication of the stability of a software product (based on changes that occur for each release of the product). The following information is determined:
MT = the number of modules in the current release
Fc = the number of modules in the current release that have been changed
Fa = the number of modules in the current release that have been added
Fd = the number of modules from the preceding release that were deleted in the current release
The software maturity index is computed in the following manner:
SMI = [MT − (Fa + Fc + Fd)] / MT
As SMI approaches 1.0, the product begins to stabilize. SMI may also be used as a metric for planning software maintenance activities: the mean time to produce a release of a software product can be correlated with SMI, and empirical models for maintenance effort can be developed.
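A short Python sketch showing the SMI computation for a hypothetical release:

    # Hypothetical release data.
    MT = 120   # modules in the current release
    Fc = 6     # modules changed
    Fa = 4     # modules added
    Fd = 2     # modules deleted since the preceding release

    SMI = (MT - (Fa + Fc + Fd)) / MT
    print(SMI)   # 0.9 -> the product is approaching stability as SMI approaches 1.0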
Software Measurement
Measurement enables us to gain insight into the process and the project by providing a mechanism for objective evaluation.
• Basic quality and productivity data are collected. These data are then analyzed, compared against past averages, and assessed to determine whether quality and productivity improvements have occurred.
• Metrics are also used to pinpoint problem areas so that remedies can be developed and the software process can be improved.
The categories of measurement discussed in the following sections are:
• Size Oriented Metrics
• Object Oriented Metrics
• Function Oriented Metrics
• Use Case Oriented Metrics
• Web Engineering Project Metrics
Size Oriented Metrics
Size-oriented metrics are derived by normalizing quality and productivity measures by the size of the software that has been produced. The organization builds a simple record of size measures for its software projects, drawing on past experience. This is one of the simplest and earliest metrics used to measure the size of computer programs, and it is a direct measure of the software. Size-oriented metrics are also used for measuring and comparing the productivity of programmers. The size measurement is based on lines of code, where a line of code is defined as one line of text in a source file. While counting lines of code, the simplest standard is:
• Don't count blank lines
• Don't count comments
• Count everything else (a minimal sketch of this counting rule is shown below)
Note that the size-oriented measure is not a universally accepted method.
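As an illustration of the counting rule above, here is a minimal Python sketch; it assumes a language whose comments start with "#", which is a simplification.

    def count_loc(source_text):
        loc = 0
        for line in source_text.splitlines():
            stripped = line.strip()
            if not stripped:                  # don't count blank lines
                continue
            if stripped.startswith("#"):      # don't count comment-only lines (simplified rule)
                continue
            loc += 1                          # count everything else
        return loc

    sample = """# demo module
    x = 1

    y = x + 2  # a line with an inline comment still counts as code
    """
    print(count_loc(sample))   # 2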
Size Oriented Metrics
A simple set of size-oriented measures that can be developed is given below:
• Size = kilo lines of code (KLOC)
• Effort = person-months
• Productivity = KLOC / person-month
• Quality = number of faults / KLOC
• Cost = $ / KLOC
• Documentation = pages of documentation / KLOC
Example of size-oriented metrics: for size-oriented metrics, a software organization maintains records in tabular form. The typical table entries are: project name, LOC, effort, pages of documentation, errors, defects, and the total number of people working on the project.
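A worked Python sketch using a hypothetical project record to compute the derived size-oriented metrics listed above:

    # Hypothetical project record, as it might appear in the organization's table.
    project = {
        "name": "alpha",
        "loc": 12_100,
        "effort_pm": 24,        # person-months
        "cost_dollars": 168_000,
        "doc_pages": 365,
        "errors": 134,
    }

    kloc = project["loc"] / 1000
    productivity = kloc / project["effort_pm"]      # KLOC per person-month
    quality = project["errors"] / kloc              # errors per KLOC
    cost = project["cost_dollars"] / kloc           # dollars per KLOC
    documentation = project["doc_pages"] / kloc     # pages of documentation per KLOC

    print(round(productivity, 2), round(quality, 1), round(cost), round(documentation, 1))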
Size Oriented Metrics: Conclusion
In conclusion, size-oriented measures are a useful tool for software projects because they are easy to use, standardized, and suitable for estimation. They do, however, have drawbacks, such as being dependent on the programming language and being difficult to estimate in the early stages of a project. Organizations that understand these measures and how to use them can make better decisions and improve their software development processes.
Function Oriented Metrics
Function-oriented metrics were developed by Albrecht in 1979 at IBM (International Business Machines). He suggested a measure known as function points, derived using an empirical relationship based on countable measures of the software's information (requirements) domain and assessments of the software's complexity. Function-oriented metrics are also known as the Function Point Model. This model focuses on the functionality of the software application being delivered. The method is independent of the programming language used and is based on calculating the function point (FP). A function point is a unit of measurement that expresses the amount of business functionality provided by the product. To determine whether a particular entry is simple, average, or complex, a criterion is needed and should be developed by the organization. With the help of observations or experiments, the different weighting factors are determined, and with the help of these weighting-factor tables, the count total can be computed.
Function Oriented Metrics
The software complexity can be assessed by answering the following fourteen questions:
• Does the system need reliable backup and recovery?
• Are data communications required?
• Are there distributed processing functions?
• Is performance critical?
• Will the system run in an existing, heavily utilized operational environment?
• Does the system require on-line data entry?
• Does the on-line data entry require the input transaction to be built over multiple screens or operations?
• Are the master files updated on-line?
• Are the inputs, outputs, files, or inquiries complex?
• Is the internal processing complex?
• Is the code designed to be reusable?
• Are conversion and installation included in the design?
• Is the system designed for multiple installations in different organizations?
• Is the application designed to facilitate change and ease of use by the user?
Each question is answered on a scale from 0 (not important or applicable) to 5 (absolutely essential), and the answers feed the function point adjustment, as sketched below.
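The fourteen questions above act as complexity-adjustment factors Fi. As a hedged sketch (the domain weights and the 0.65/0.01 adjustment constants are the commonly published Albrecht-style values, assumed here rather than taken from the missing slide table, and all counts are hypothetical), FP can be computed as follows:

    # Commonly published weights for the five information-domain counts
    # (simple, average, complex); treated here as assumed values.
    WEIGHTS = {
        "external_inputs":     (3, 4, 6),
        "external_outputs":    (4, 5, 7),
        "external_inquiries":  (3, 4, 6),
        "internal_files":      (7, 10, 15),
        "external_interfaces": (5, 7, 10),
    }

    # Hypothetical counts per domain, classified as (simple, average, complex).
    counts = {
        "external_inputs":     (6, 2, 1),
        "external_outputs":    (4, 1, 0),
        "external_inquiries":  (3, 0, 0),
        "internal_files":      (2, 1, 0),
        "external_interfaces": (1, 0, 0),
    }

    # Answers to the fourteen complexity questions, each rated 0 to 5 (hypothetical).
    fi = [3, 4, 2, 5, 3, 4, 2, 3, 3, 2, 1, 2, 0, 4]

    count_total = sum(c * w for k in WEIGHTS
                      for c, w in zip(counts[k], WEIGHTS[k]))
    fp = count_total * (0.65 + 0.01 * sum(fi))
    print(count_total, round(fp, 1))   # 91 and 93.7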
Object Oriented Metrics
The following metrics can provide insight with respect to object-oriented projects:
• Number of scenario scripts (NSS): The number of scenario scripts or use cases is directly proportional to the number of classes required to meet requirements; the number of states for each class; and the number of methods, attributes, and collaborations. NSS is a strong indicator of program size.
• Number of key classes (NKC): A key class focuses directly on the business domain for the problem and will have a lower probability of being implemented via reuse. For this reason, high values for NKC indicate substantial development work. Lorenz and Kidd suggest that between 20 and 40 percent of all classes in a typical OO system are key classes; the remainder support infrastructure (GUI, communications, databases, etc.).
• Number of support classes: Support classes are required to implement the system but are not immediately related to the problem domain. Examples might be user interface (UI) classes, database access and manipulation classes, and computation classes. In addition, support classes can be developed for each of the key classes. Support classes are defined iteratively throughout an evolutionary process. The number of support classes is an indication of the amount of effort required to develop the software and also an indication of the potential amount of reuse to be applied during system development.
Object Oriented Metrics
• Average number of support classes per key class: In general, key classes are known early in the project, while support classes are defined throughout it. If the average number of support classes per key class were known for a given problem domain, estimating (based on the total number of classes) would be greatly simplified, as in the small example below.
• Number of subsystems (NSUB): The number of subsystems provides insight into resource allocation, scheduling (with particular emphasis on parallel development), and overall integration effort.
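A tiny worked example (hypothetical numbers) of the estimation idea behind the average-support-classes metric:

    # Hypothetical early-project estimate.
    key_classes = 30
    avg_support_per_key = 2.5   # assumed historical average for this problem domain

    estimated_total_classes = key_classes * (1 + avg_support_per_key)
    print(estimated_total_classes)   # 105.0 classes to plan and schedule for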
Use Case Oriented Metrics
• Use cases are widely used as a method for describing customer-level or business-domain requirements that imply software features and functions.
• It would seem reasonable to use the use case as a normalization measure similar to LOC or FP. Like FP, the use case is defined early in the software process, allowing it to be used for estimation before significant modeling and construction activities are initiated.
• Use cases describe (indirectly, at least) user-visible functions and features that are basic requirements for a system. The use case is independent of programming language. In addition, the number of use cases is directly proportional to the size of the application in LOC and to the number of test cases that will have to be designed to fully exercise the application.
• Researchers have suggested use-case points (UCPs) as a mechanism for estimating project effort and other characteristics. The UCP is a function of the number of actors and transactions implied by the use-case models and is analogous to FP in some ways.
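The slides do not give a UCP formula. As a hedged sketch of the commonly cited Karner-style scheme (the actor weights 1/2/3, use-case weights 5/10/15, and the adjustment factors below are assumptions, not taken from this material), UCP combines weighted actor and use-case counts with technical and environmental factors:

    # Hypothetical counts of actors and use cases by complexity class.
    actors = {"simple": 2, "average": 3, "complex": 1}
    use_cases = {"simple": 4, "average": 6, "complex": 2}

    UAW = 1 * actors["simple"] + 2 * actors["average"] + 3 * actors["complex"]
    UUCW = 5 * use_cases["simple"] + 10 * use_cases["average"] + 15 * use_cases["complex"]

    TCF, EF = 0.95, 0.9            # assumed technical and environmental adjustment factors
    UCP = (UAW + UUCW) * TCF * EF
    print(UAW, UUCW, round(UCP, 1))   # 11, 110, 103.5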
Web Engineering Project Metrics
The objective of all WebApp projects is to deliver a combination of content and functionality to the end user. Measures and metrics used for traditional software engineering projects are difficult to translate directly to WebApps. Yet, it is possible to develop a database that allows assessment of internal productivity and quality measures derived over a number of projects. Among the measures that can be collected are:
• Number of static Web pages. These pages represent low relative complexity and generally require less effort to construct than dynamic pages. This measure provides an indication of the overall size of the application and the effort required to develop it.
• Number of dynamic Web pages. These pages represent higher relative complexity and require more effort to construct than static pages. This measure provides an indication of the overall size of the application and the effort required to develop it.
• Number of internal page links. This measure provides an indication of the degree of architectural coupling within the WebApp. As the number of page links increases, the effort expended on navigational design and construction also increases.
Web Engineering Project Metrics
• Number of persistent data objects. As the number of persistent data objects (e.g., a database or data file) grows, the complexity of the WebApp also grows and the effort to implement it increases proportionally.
• Number of external systems interfaced. As the requirement for interfacing grows, system complexity and development effort also increase.
• Number of static content objects. These objects represent low relative complexity and generally require less effort to construct than dynamic content objects.
• Number of dynamic content objects. These objects represent higher relative complexity and require more effort to construct than static content objects.
• Number of executable functions. As the number of executable functions (e.g., a script or applet) increases, modeling and construction effort also increase.
Metrics for Software Quality
• Technical measures must be used to evaluate quality in objective, rather than subjective, ways.
• Quality must be evaluated as the project progresses.
• The primary thrust is to measure errors and defects; these metrics provide an indication of the effectiveness of software quality assurance and control activities.
Measuring Quality
• Correctness: defects per KLOC.
• Maintainability: the ease with which a program can be corrected, adapted, and enhanced, expressed in terms of time or cost.
  - Time-oriented metric: mean-time-to-change (MTTC)
  - Cost-oriented metric: spoilage, the cost to correct defects encountered
• Integrity: the ability to withstand attacks (see the sketch after this list).
  - Threat: the probability that an attack of a specific type will occur within a given time.
  - Security: the probability that an attack of a specific type will be repelled.
  integrity = Σ [(1 − threat) × (1 − security)]
• Usability: an attempt to quantify "user-friendliness" in terms of four characteristics:
  1) the physical/intellectual skill required to learn the system,
  2) the time required to become moderately efficient in the use of the system,
  3) the net increase in productivity, and
  4) a subjective assessment of user attitude toward the system (e.g., via a questionnaire).
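A minimal Python sketch of the integrity computation, applying the formula exactly as stated above to hypothetical threat/security probabilities per attack type:

    # Hypothetical (threat, security) probability pairs for each attack type.
    attack_profile = {
        "sql_injection": (0.25, 0.95),
        "dos":           (0.10, 0.90),
        "spoofing":      (0.05, 0.99),
    }

    integrity = sum((1 - threat) * (1 - security)
                    for threat, security in attack_profile.values())
    print(round(integrity, 4))   # 0.137 for these assumed values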
Defect Removal Efficiency
Defect removal efficiency (DRE) captures the effectiveness of defect identification and resolution: it measures how thoroughly defects are found and removed before the software reaches production, and so quantifies the quality of the testing process. The formula for computing defect removal efficiency is:
DRE (%) = [Total Defects Found in Testing / (Total Defects Found in Testing + Total Defects Found in Production)] × 100
where:
• Total Defects Found in Testing = the number of defects discovered during the testing phase of software development.
• Total Defects Found in Production = the number of defects reported by users or detected after the software has been released to the production environment.
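A quick worked example of the DRE formula with hypothetical defect counts:

    # Hypothetical defect counts for one release.
    found_in_testing = 180
    found_in_production = 20

    dre = found_in_testing / (found_in_testing + found_in_production) * 100
    print(f"DRE = {dre:.1f}%")   # 90.0% -> 90% of known defects were removed before release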