Software Quality
CIS 375
Bruce R. Maxim
UM-Dearborn
Software Quality Principles
• Conformance to software requirements is the
foundation from which quality is measured.
• Specified standards define a set of
development criteria that guide the manner in
which software is engineered.
• Software quality is suspect when a software
product conforms to its explicitly stated
requirements but fails to conform to the
customer's implicit requirements (e.g., ease of
use).
McCall’s Quality Factors
• Product Operation
– Correctness
– Efficiency
– Integrity
– Reliability
– Usability
• Product Revision
– Flexibility
– Maintainability
– Testability
• Product Transition
– Interoperability
– Portability
– Reusability
[Figure: McCall's quality factors, grouped by product
operation, product revision, and product transition]
McCall’s Software Metrics
• Auditability
• Accuracy
• Communication
commonality
• Completeness
• Consistency
• Data commonality
• Error tolerance
• Execution efficiency
• Expandability
• Generality
• Hardware independence
• Instrumentation
• Modularity
• Operability
• Security
• Self-documentation
• Simplicity
• Software system
independence
• Traceability
• Training
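
McCall ties metrics like these to the quality factors through a weighted relationship of the form Fq = c1*m1 + c2*m2 + ... + cn*mn, where the mn are normalized metric scores and the cn are weights calibrated to the product. A minimal sketch of that combination; the factor chosen, the weights, and the scores below are all invented for illustration:

# McCall quality factor as a weighted sum of metric scores:
#   Fq = sum(c_n * m_n)
# Weights (c_n) and metric scores (m_n, normalized to 0..1)
# are hypothetical values chosen for illustration.

def quality_factor(weights, scores):
    """Combine normalized metric scores into one factor score."""
    assert weights.keys() == scores.keys()
    return sum(weights[m] * scores[m] for m in weights)

# Illustrative grading of one factor from three of McCall's metrics.
weights = {"completeness": 0.4, "consistency": 0.3, "traceability": 0.3}
scores = {"completeness": 0.9, "consistency": 0.8, "traceability": 0.7}

print(quality_factor(weights, scores))  # 0.81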
FURPS Quality Factors
• Functionality
• Usability
• Reliability
• Performance
• Supportability
ISO 9126 Quality Factors
• Functionality
• Reliability
• Usability
• Efficiency
• Maintainability
• Portability
Measurement Process - 1
• Formulation
– derivation of software measures and metrics
appropriate for the software representation
being considered
• Collection
– mechanism used to accumulate the data used
to derive the software metrics
• Analysis
– computation of metrics
Measurement Process - 2
• Interpretation
– evaluation of metrics that results in gaining
insight into quality of the work product
• Feedback
– recommendations derived from the
interpretation of the metrics are transmitted
to the software development team
Technical Metric Formulation
• The objectives of measurement should be
established before collecting any data.
• Each metric is defined in an unambiguous
manner.
• Metrics should be based on a theory that is
valid for the application domain.
• Metrics should be tailored to accommodate
specific products and processes.
Software Metric Attributes
• Simple and computable
• Empirically and intuitively persuasive
• Consistent and objective
• Consistent in use of units and measures
• Programming language independent
• Provides an effective mechanism for
quality feedback
Representative Analysis Metrics
• Function-based metrics (e.g. function points;
see the sketch below)
• Bang metric
– classifies an application as function strong
or data strong
• Davis specification quality metrics
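
The function-based metric referenced above is the function point, usually computed as FP = count total × (0.65 + 0.01 × ΣFi), where ΣFi sums the fourteen value adjustment factors. A sketch with invented counts; the weights shown are the standard average-complexity weights:

# Function point computation: FP = count_total * (0.65 + 0.01 * sum(F_i)).
# Counts and adjustment ratings below are illustrative.

# (count, weight) pairs using the average complexity weights.
measures = {
    "external inputs": (10, 4),
    "external outputs": (7, 5),
    "external inquiries": (5, 4),
    "internal logical files": (4, 10),
    "external interface files": (2, 7),
}
count_total = sum(n * w for n, w in measures.values())  # 149

# Fourteen value adjustment factors, each rated 0 (no influence)
# to 5 (essential); assume "average" influence throughout.
value_adjustment = [3] * 14

fp = count_total * (0.65 + 0.01 * sum(value_adjustment))
print(fp)  # 149 * 1.07 = 159.43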
Specification Quality Metrics - 1
nn = nf + nnf
nn = total # of requirements.
nf = # of functional requirements.
nnf = # of non-functional requirements.
Specificity, Q1 = nai / nn
nai = # of requirements for which all
reviewers agree on the interpretation.
Specification Quality Metrics - 2
Completeness, Q2 = nu / (ni * ns)
nu = # of unique function requirements.
ni = # of inputs (stimuli) defined by the specification.
ns = # of states specified.
Overall completeness, Q3 = nc / (nc + nnv)
nc = # of requirements validated as correct.
nnv = # of requirements not yet validated.
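
A short sketch computing the three Davis indicators; every count below is invented:

# Davis specification quality indicators (illustrative counts).

n_f, n_nf = 40, 15          # functional / non-functional requirements
n_n = n_f + n_nf            # total requirements: 55

n_ai = 44                   # requirements all reviewers interpret identically
q1_specificity = n_ai / n_n            # 0.8

n_u, n_i, n_s = 30, 10, 4   # unique functions, inputs, states
q2_completeness = n_u / (n_i * n_s)    # 0.75

n_c, n_nv = 50, 5           # validated-correct vs. not-yet-validated
q3_overall = n_c / (n_c + n_nv)        # ~0.91

print(q1_specificity, q2_completeness, q3_overall)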
Representative Design Metrics - 1
• Architectural design metrics
– Structural complexity (based on module fanout)
– Data complexity (based on module interface
inputs and outputs)
– System complexity (sum of structural and data
complexity)
– Morphology (number of nodes and arcs in
program graph)
– Design structure quality index (DSQI)
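
The structural, data, and system complexity measures above follow Card and Glass: S(i) = fout(i)^2, D(i) = v(i)/(fout(i) + 1), and C(i) = S(i) + D(i), where v(i) counts the variables at module i's interface. A sketch over an invented module table:

# Card and Glass architectural complexity measures:
#   structural S(i) = fan_out(i) ** 2
#   data       D(i) = v(i) / (fan_out(i) + 1), v(i) = interface variables
#   system     C(i) = S(i) + D(i)
# Module data below is hypothetical.

modules = {
    # name: (fan_out, interface_variables)
    "ui":      (3, 6),
    "engine":  (4, 10),
    "storage": (1, 4),
}

for name, (fan_out, v) in modules.items():
    s = fan_out ** 2
    d = v / (fan_out + 1)
    print(f"{name}: S={s}, D={d:.2f}, C={s + d:.2f}")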
Representative Design Metrics - 2
• Component-level design metrics
– Cohesion metrics (data slice, data tokens,
glue tokens, superglue tokens, stickiness)
– Coupling metrics (data and control flow,
global, environmental)
– Complexity metrics (e.g. cyclomatic
complexity; see the sketch below)
• Interface design metrics (e.g. layout
appropriateness)
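
For a connected control-flow graph with E edges and N nodes, cyclomatic complexity is V(G) = E - N + 2, which also equals the number of binary decisions plus one. A sketch over a small invented flow graph:

# Cyclomatic complexity of a control-flow graph: V(G) = E - N + 2.
# The graph below (an if-else followed by a loop) is hypothetical.

edges = [
    (1, 2), (1, 3),   # if / else branches
    (2, 4), (3, 4),   # branches rejoin
    (4, 5), (5, 4),   # loop body and back edge
    (5, 6),           # loop exit
]
nodes = {n for edge in edges for n in edge}

v_g = len(edges) - len(nodes) + 2
print(v_g)  # 7 edges - 6 nodes + 2 = 3 (two decisions + 1)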
Halstead’s Software Science
Source Code Metrics
• Overall program length
• Potential minimum algorithm volume
• Actual algorithm volume
– number of bits used to specify program
• Program level
– software complexity
• Language level
– constant for given language
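
Halstead's measures derive from four token counts: n1 and n2 (distinct operators and operands) and N1 and N2 (their total occurrences). A sketch of the core computations with invented counts:

from math import log2

# Halstead software science measures from hypothetical token counts.
n1, n2 = 10, 20     # distinct operators / distinct operands
N1, N2 = 60, 90     # total operator / operand occurrences

N = N1 + N2                            # observed program length: 150
N_hat = n1 * log2(n1) + n2 * log2(n2)  # estimated length: ~119.7
V = N * log2(n1 + n2)                  # volume, in bits: ~736.0
L = (2 / n1) * (n2 / N2)               # estimated program level: ~0.044

print(N, round(N_hat, 1), round(V, 1), round(L, 3))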
Testing Metrics
• Metrics that predict the likely number of
tests required during various testing
phases
• Metrics that focus on test coverage for a
given component
Estimating Number of Errors
Error Seeding - 1
(s / S) = (n / N)
S = # of seeded errors
s = # seeded errors found
N = # of actual errors
n = # of actual errors found so far
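
Solving the proportion for N gives the usual estimate N ≈ S × n / s. A sketch with invented counts:

# Error seeding: (s / S) = (n / N)  =>  N ~= S * n / s.
# Counts below are hypothetical.

S = 20   # errors deliberately seeded
s = 16   # seeded errors found so far
n = 40   # real (unseeded) errors found so far

N = S * n / s            # estimated total real errors: 50
remaining = N - n        # estimated real errors still latent: 10
print(N, remaining)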
Estimating Number of Errors
Error Seeding – 2
Two testers independently test the same program:
x = # of real errors found by tester 1
y = # of real errors found by tester 2
q = # of real errors found by both
n = total # of real errors
E(1) = x / n, estimated by q / y
(# found by both / # found by tester 2)
E(2) = y / n, estimated by q / x
(# found by both / # found by tester 1)
n = q / (E(1) * E(2)) = (x * y) / q
Estimating Number of Errors
Error Seeding – 3
Assume
x = 25
y = 30
q = 15
E(1) = (15 / 30) = .5
E(2) = (15 / 25) = .6
n = 15 / [(.5)(.6)] = 50 errors
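
The same computation in code, using the numbers above; note that n = q / (E(1) × E(2)) reduces to x·y/q:

# Two-tester (capture-recapture) estimate of total real errors.
x, y, q = 25, 30, 15     # found by tester 1, by tester 2, by both

E1 = q / y               # 0.5, tester 1 effectiveness
E2 = q / x               # 0.6, tester 2 effectiveness
n = q / (E1 * E2)        # 50.0, also equal to x * y / q

print(E1, E2, n)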
Software Confidence
S = # of seeded errors.
N = # of actual errors the program is asserted to contain.
n = # of actual errors found during testing.
Assuming all S seeded errors have been found:
C (confidence level) = 1 if n > N
C (confidence level) =
S / (S – N + 1) if n <= N
Example: N = 0 and S = 10
C = 10 / (10 – 0 + 1) = 10/11 ≈ 91%
Confidence Example
How many seeded errors need to be used
and found to have a 98% confidence a
program is bug free?
N = 0
C = S / (S – 0 + 1) = 98/100
Solving for S:
S = 49
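
Solving C = S / (S + 1) ≥ target for the smallest S (the N = 0 case) can be done directly, as in this sketch; it reproduces the 98% answer above and shows that a 90% target needs only S = 9:

from math import ceil

def seeds_needed(confidence):
    """Smallest S with S / (S + 1) >= confidence (the N = 0 case):
    S >= confidence / (1 - confidence)."""
    return ceil(confidence / (1 - confidence))

print(seeds_needed(0.98))  # 49
print(seeds_needed(0.90))  # 9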
Failure Intensity
• Suppose failure intensity is proportional to the
# of faults present at the start of testing,
weighted by each function's share of duty time
(K is the proportionality constant).
– function A has 90% of the duty time
– function B has 10% of the duty time
• Suppose there are 100 total errors:
50 in A, 50 in B
Initial intensity: (.9)(50)K + (.1)(50)K = 50K
Fixing all of A's errors leaves (.1)(50)K = 5K,
a reduction of (.9)(50)K = 45K
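
The same duty-time arithmetic in code; K is the unknown proportionality constant, so intensities are expressed in units of K:

# Failure intensity ~ K * sum(duty_time_share * faults) over functions.
duty = {"A": 0.9, "B": 0.1}
faults = {"A": 50, "B": 50}

initial = sum(duty[f] * faults[f] for f in duty)          # 50 (units of K)
after_fixing_A = duty["B"] * faults["B"]                  # 5  (units of K)
print(initial, after_fixing_A, initial - after_fixing_A)  # 50.0 5.0 45.0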
Maintenance Metrics
Software Maturity Index
SMI = [Mt – (Fa + Fc + Fd)] / Mt
Mt = # of modules in the current release.
Fa = # of modules added.
Fc = # of modules changed.
Fd = # of modules deleted from the
preceding release.
• SMI approaches 1.0 as the product begins to
stabilize
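
A minimal SMI computation with invented module counts:

# Software Maturity Index: SMI = (Mt - (Fa + Fc + Fd)) / Mt.
Mt = 120   # modules in the current release
Fa = 6     # modules added
Fc = 9     # modules changed
Fd = 3     # modules deleted

smi = (Mt - (Fa + Fc + Fd)) / Mt
print(round(smi, 2))  # 0.85 -- closer to 1.0 means a more stable release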
