Software Quality Assurance
Robert L. Straitt
University of Corsica
Corte, Corsica
Software Quality Assurance
• This course introduces students to Software Quality Assurance
principles as practiced in industry today.
• The course is presented in English because English has become the
international language of software development, and a student's mastery
of technical English can greatly increase their marketability to industry.
• This presentation, in whole or in part, may be used and referred
to by students during the end-of-course exam.
Notice for Students
• Students should pay careful attention to the sections of this
presentation that are underlined, or underlined and bolded,
as these are the most important points of the lectures and are
the source material for questions on the exam.
Software Quality Assurance
• Overview of Software Quality Assurance
• Quality Attribute Design Primitives
• Defect Prevention
• Software Metrics
• Process Improvement
Software Quality Assurance
Overview of Software Quality
Assurance
Introduction
• Software Quality Assurance (SQA) is a group of related
activities employed throughout the software life cycle to
positively influence and quantify the quality of the delivered
software. SQA is not exclusively associated with any major
software development activity, but spans the entire software life
cycle.
• It consists of both process and product assurance. Its methods
include assessment activities such as ISO 9000 and CBA IPI
(CMM-Based Appraisal for Internal Process Improvement),
analysis functions such as reliability prediction and causal
analysis, and direct application of defect detection methods such
as formal inspection and testing.
Software Quality Factors, Definitions, and Candidate Metrics

• Correctness: Extent to which the software conforms to specifications and standards. Candidate metric: Defects / LOC
• Efficiency: Relative extent to which a resource is utilized (i.e., storage space, processing time, communication time). Candidate metric: Actual Resource Utilization / Allocated Resource Utilization
• Expandability: Relative effort to increase software capability or performance by enhancing current functions or by adding new functions or data. Candidate metric: EffortToExpand / EffortToDevelop
• Flexibility: Ease of effort for changing software missions, functions, or data to satisfy other requirements. Candidate metric: 0.05 / AvgLaborDaysToChange
• Integrity: Extent to which the software will perform without failure due to unauthorized access to the code or data. Candidate metric: Defects / LOC
• Interoperability: Relative effort to couple the software of one system to the software of another. Candidate metric: EffortToCouple / EffortToDevelop
• Maintainability: Ease of effort for locating and fixing a software failure within a specified time period. Candidate metric: 0.1 / AvgLaborDaysToFix
• Portability: Relative effort to transport the software for use in another environment (hardware configuration and/or software system environment). Candidate metric: EffortToTransport / EffortToDevelop
• Reliability: Extent to which the software will perform without any failures within a specified time period. Candidate metric: Defects / LOC
• Reusability: Relative effort to convert a software component for use in another application. Candidate metric: EffortToConvert / EffortToDevelop
• Survivability: Extent to which the software will perform and support critical functions without failure within a specified time period when a portion of the system is inoperable. Candidate metric: Defects / LOC
• Usability: Relative effort for using software (training and operation, e.g., familiarization, input preparation, execution, output interpretation). Candidate metric: LaborDaysToUse / LaborYearsToDevelop
• Verifiability: Relative effort to verify the specified software operation and performance. Candidate metric: EffortToVerify / EffortToDevelop
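Several of the candidate metrics above are simple ratios that can be computed directly from project data. A minimal sketch in Python (the function names and the sample figures are illustrative, not from the source):

```python
def defect_density(defects: int, loc: int) -> float:
    """Candidate metric for correctness, integrity, reliability,
    and survivability: defects per line of code."""
    return defects / loc

def relative_effort(effort_to_change: float, effort_to_develop: float) -> float:
    """Candidate metric for expandability, interoperability, portability,
    reusability, and verifiability: effort for the change relative to
    the original development effort."""
    return effort_to_change / effort_to_develop

# Illustrative figures: 45 defects found in 30,000 LOC, and a port
# that took 120 labor-days versus 2,000 labor-days to develop.
print(defect_density(45, 30_000))    # defects per LOC
print(relative_effort(120, 2_000))   # portability ratio
```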
Process & Product Assurance
• Software Quality Assurance is the set of activities that verify the ability of the
software process to produce a software product that is fit for use.
• SQA provides senior management with feedback on process compliance.
• It provides senior management with an early warning of quality
deficiencies and recommendations to address the situation.
• Software quality assurance personnel MUST have a reporting path that
is independent of the management responsible for the activities being
evaluated. This independence prevents conflicts with project management's
product-delivery goals and encourages adherence to the official process.
• The methods typically used to accomplish process assurance include SQA
audit and reporting, assessment, and statistical process control analysis.
Process & Product Assurance
• Assurance that the final product performs as specified is the role of product
assurance. This includes in-process methods, as well as some methods that
involve independent oversight.
• Embedded SQA processes are activities that are part of the development life
cycle and "build in" the desired product quality. This focus allows
identification and elimination of defects as early in the life cycle as
possible, thus reducing maintenance and test costs. Embedded SQA methods
include formal inspection, reviews, and testing.
• Independent oversight functions, such as an independent test function or testing
witnessed by an independent entity such as SQA, are other methods
of providing product assurance. Further methods include tests witnessed by
customers, expert review of test results, and audits of the product.
Process & Product Assurance
• Remember that for each unique product there is only one
unique process that will produce it:

  Process = Product
  (A = B)

  Design Requirement + Process = Product + Required Characteristics
  (X + A = B + X)
Methods and Technologies
• Many methods are used to perform the quality assurance
functions. Audits are used to examine the conformance of a
development process to procedures and of products to
standards. Embedded SQA activities, which have the purpose
of detecting and removing errors, take a variety of forms,
including inspection and testing. Assessment is another method
of process assurance. Analysis techniques, such as causal
analysis, reliability prediction and statistical process control,
help ensure both process and product conformance.
Methods and Technologies
Embedded SQA Error
Detection Methods
• Audits
• Formal Inspection
• Reviews
• Walkthroughs
• Testing
• Assessment
• Causal Analysis/Defect
Prevention
• Reliability Prediction
• Statistical Process Control
Methods and Technologies
Audits
• Audits are embedded into the software life cycle, as well as being performed
by independent SQA outside of the life cycle.
• An external audit focuses on the adherence to established standards and
procedures. Evaluation of the effectiveness of the process is part of an SQA
audit. This type of audit examines records according to a sampling process to
determine if procedures are being followed correctly.
• In general, an embedded audit examines work products to determine if the
products conform to standards and if project status is accurate. Manual or
automated methods can be used.
Methods and Technologies
Formal Inspection
• Formal inspection is an examination of a software product at different
stages of the development process. It can utilize checklists, expert
inspectors, and a trained inspection moderator. The objective is to
identify defects in the product and the process deficiencies that
allowed them to exist.
• Certain projects which have an effectively performing inspection
process report better than 80% defect detection rates.[1]
• [1] Tom Gilb and Dorothy Graham, Software Inspection (Wokingham, England: Addison-Wesley Publishing,
1993), pp. 18-19.
Methods and Technologies
Reviews
• Reviews are performed in between formal inspections as an SQA method.
Informal design and code review methods are generally done at the
discretion of the product manager, but should always follow a defined
process and should be summarized and reported at the project level.
• The term “review” is also used to refer to project meetings (e.g., a product
design review) which emphasize resolving issues and which have a primary
objective of assessing the current state of the product.[1]
[1] Robert G. Ebenau and Susan H. Strauss, Software Inspection Process (New York: McGraw-Hill, 1994),
pp. 283-285.
Methods and Technologies
Walkthroughs
• Walkthroughs are meetings in which the author of the
product acts as presenter to proceed through the material in a
stepwise manner.
• The objective is often raising and/or resolving design or
implementation issues.
• Walkthroughs can be more informal, but they should follow a defined
walkthrough process and be documented.
Methods and Technologies
Testing
• Testing is a dynamic analysis technique with the objective of error
detection.
• Testing of software is performed on individual components during
intermediate stages of development, subsystems following
integration, and entire software systems.
• Testing involves execution of the software and evaluation of its
behavior, in response to a set of inputs, against documented, required
behavior.
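The last bullet describes exactly what an automated test encodes: execute the software on a set of inputs and compare its behavior against the documented requirement. A minimal sketch in Python (the `saturating_add` function and its required behavior are hypothetical, invented purely for illustration):

```python
def saturating_add(a: int, b: int, limit: int = 255) -> int:
    """Unit under test: add two values, clamping the result at `limit`."""
    return min(a + b, limit)

# Each case pairs an input with the documented, required behavior.
required_behavior = [
    ((10, 20), 30),     # normal addition
    ((200, 100), 255),  # result must saturate at the limit
    ((0, 0), 0),        # boundary case
]

for (a, b), expected in required_behavior:
    actual = saturating_add(a, b)
    assert actual == expected, f"saturating_add({a}, {b}) = {actual}, expected {expected}"
print("all test cases passed")
```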
Methods and Technologies
Assessment
• Assessment is determining the capability of a process through
comparison with a standard. Examples of standard assessment
methods which are frequently employed are ISO 9000 and SEI SW-
CMM®.
• The Software Engineering Institute (SEI) at Carnegie Mellon
University was established by the Air Force in 1984 to improve the
practice of software engineering. The Software Capability Maturity
Model (SW-CMM®) is a model for software process assessment.
Methods and Technologies
Assessment (cont.)
• The ISO 9001 international standard addresses quality requirements
across industries. The requirements within the standard are written
in a generic manner. The ISO 9000-3 document gives guidance for
applying the ISO 9001 standard to software.
• Assessments use outside individuals to perform a combination of
random auditing and interviewing to answer a list of questions
which is tailored to fit the organization being assessed.
Methods and Technologies
Causal Analysis/Defect Prevention
• The purpose of these activities is to probe for the process
weaknesses that allowed defects to be inserted into the
product, so that similar defects can be prevented before they
occur. Root cause analysis, which may include developers and
other analysts, determines the root cause of the defects found.
Process improvement initiatives are often generated and passed
on to a process management team.
• It is important that the elapsed time between defect
discovery and causal analysis be minimized.
Methods and Technologies
Causal Analysis/Defect Prevention Diagram
Methods and Technologies
Causal Analysis/Defect Historical Example
[Figure: timeline of process improvements, 1976-1998, ranging from structured flows, formal software inspections, and formalized configuration control in the late 1970s, through error counting and metrics, error cause analysis, and formal requirements inspections in the 1980s, to ISO 9001 certification, reliability modeling, process maturity measurements, and a web-based troubleshooting guide in the 1990s. The same figure appears as Figure 1 in the case study.]
Methods and Technologies
Reliability Prediction
• The IEEE Standard Glossary of Software Engineering Terminology
defines software reliability as: “The ability of the software to perform
its required function under stated conditions for a stated period of
time.”[1]
• The ability to predict the reliability of a software system supports high
quality assurance and helps determine readiness for release. Three
techniques are used:
– failure record,
– behavior for a random sample of input points,
– and the quantity of actual and "seeded" faults detected during testing.
[1] Standards Coordinating Committee of the IEEE Computer Society, IEEE Standard Glossary of
Software Engineering Terminology, IEEE-STD-610.12-1990 (New York: IEEE, 1991).
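The seeded-fault technique supports a simple population estimate: if testing recovers a known fraction of the deliberately planted faults, assume it recovered the same fraction of the real ones. A minimal sketch (this is the classic fault-seeding ratio estimator; the numbers are illustrative, not from the source):

```python
def estimated_indigenous_faults(seeded: int, seeded_found: int,
                                indigenous_found: int) -> float:
    """Estimate the total number of indigenous (real) faults.

    If testing recovers seeded_found of `seeded` planted faults, assume it
    recovered the same fraction of the real faults:
        indigenous_found / N  is approximately  seeded_found / seeded
    """
    if seeded_found == 0:
        raise ValueError("no seeded faults recovered; cannot estimate")
    return indigenous_found * seeded / seeded_found

# Illustrative: 25 faults seeded, 20 recovered, 44 real faults found.
estimate = estimated_indigenous_faults(seeded=25, seeded_found=20,
                                       indigenous_found=44)
print(estimate)  # estimated total real faults: 55.0
```

The difference between the estimate and the faults already found suggests how many real faults may remain undetected.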
Methods and Technologies
Statistical Process Control
• Statistical process control is the use of statistical methods
to assure both process and product quality. This technique is used to
determine if a process is outside defined parameters, thus
indicating process defects and/or the potential for increased
product defects. These methods include:
– Pareto analysis,
– Shewhart control charts,
– histograms,
– and scatter diagrams.
Methods and Technologies
Shewhart Chart
[Figure: Shewhart X-chart with control and warning limits]
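A common convention places warning limits at two standard deviations from the process mean and control limits at three. A minimal sketch of computing X-chart limits and flagging out-of-control points (the sample data and the 2-sigma/3-sigma convention are illustrative assumptions, not from the source):

```python
from statistics import mean, pstdev

def shewhart_limits(samples):
    """Return the mean, warning limits, and control limits for an X-chart.
    Warning limits at +/-2 sigma, control limits at +/-3 sigma."""
    m = mean(samples)
    s = pstdev(samples)
    return m, (m - 2 * s, m + 2 * s), (m - 3 * s, m + 3 * s)

# Illustrative data: defects found per inspection over twelve builds.
defects = [4, 5, 3, 6, 4, 5, 4, 3, 5, 4, 6, 12]

m, (wl, wu), (cl, cu) = shewhart_limits(defects)
out_of_control = [x for x in defects if not cl <= x <= cu]
print(f"mean={m:.2f}, control limits=({cl:.2f}, {cu:.2f})")
print("out-of-control points:", out_of_control)
```

A point beyond the control limits (here, the final build with 12 defects) signals that the process may be out of defined parameters and warrants causal analysis.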
Methods and Technologies
Shewhart Chart
Shifting patterns: Shifts may result from the introduction of
new programs and engineers, process changes, new
technologies, and/or software changes. They can also result
from a change in SQA methods or standards, or from a process
improvement.
Trending patterns: Trends are usually due to gradual patching, data
degradation, requirements instability, or a lack of process control. They
can also result from a change in SQA methods or standards, or from a
process improvement.
Case Study:
Space Shuttle Flight Software [1]
• The Space Shuttle Primary Avionics Software System Flight Software (PASS FSW)
project, United Space Alliance, produces highly reliable software for NASA’s Space
Shuttle onboard computers. It is a prime example of the effective implementation of
software quality assurance. It has twice received NASA’s Quality and Excellence
Award (the George M. Low Trophy). While with IBM, the project also received the
IBM Market Driven Quality Award for Best Software Laboratory. It has also been
assessed at SEI-CMM Level 5.
• To achieve the level of quality control desired, the PASS FSW project has mainly
relied on embedded SQA activities. Because of the NASA customer’s focus on the
delivery of error-free software, the FSW management team felt that this could only
be achieved by making every individual on the project responsible for the quality
of the delivered software. This decision led to embedding within the development
process activities designed to identify and remove errors from the software product and
to also remove the cause of those errors from the process.
[1] STSC: Software Quality Assurance, Revision 0.2, April 6, 2000.
SQA Case Study (cont.)
• Beginning in the mid-1970s, the project built on a foundation that included
competent and controlled software project management techniques and effective
configuration management (reference Figure 1. PASS FSW Process Improvement
History). The project started a practice that is still performed today: an error
oversight analysis process which entailed counting errors found, analyzing those
errors, correcting them and implementing process improvements to prevent the
recurrence of those errors. When this process began, errors in the released product
were examined. Focusing on those errors led to the realization of how expensive it
was to remove errors found late in the development life cycle.
• Formal design and code inspections were introduced in the late 1970s to enhance
the possibility for early discovery of errors, when it was cheapest to correct those
errors. When the number of errors in the released software decreased to such low
numbers that error trends were not evident, errors found during inspections were
also subjected to the same kind of oversight analysis. This led to improvements in
the inspection process and later a change in focus of development unit testing to
mirror expected customer usage of the software.
SQA Case Study
Figure 1: PASS FSW Process Improvement History
[Figure: timeline of PASS FSW process improvements, 1976-1998: structured flows, formal software inspections, and formalized configuration control in the late 1970s; error counting and metrics, error cause analysis, formalized requirements analysis, and formal requirements inspections in the 1980s; ISO 9001 certification, reliability modeling, process maturity measurements, inspection process tools, and a web-based troubleshooting guide in the 1990s.]
SQA Case Study (cont.)
• Formalization of the independent verification and validation process through
documenting verification techniques was performed in the early 1980s. Inspections
of the verifiers’ test procedures were added shortly thereafter, and the NASA
customer was invited to participate. Further enhancements in verification included
improved software interface analysis and the institution of functional test teams to
improve technical education.
• Continued error analysis identified requirements as a major source of errors. The
next major enhancement was a formalization of the requirements analysis process to
provide better requirements from which the developers and testers could work.
Formal requirements inspections soon followed in 1986. The developers and
verifiers participated with the requirements analysts in the inspections to help
produce requirements that were viable for both implementation and testing. The
requirements author and NASA customer were also invited to provide insights into
requirements intent and from the customer’s viewpoint. Later, these inspections
were enhanced to include the requirements author’s description of potential software
usage scenarios for major capabilities. All of these measures resulted in a significant
decrease in software errors due to requirements problems.
SQA Case Study (cont.)
• Even an organizational change driven by the NASA customer’s
direction to reduce personnel on the project was done in a way that
resulted in further error reduction. The same personnel who performed
the requirements analysis at the beginning of the software life cycle
were given the responsibility for completing the system performance
verification at the end of the development cycle. Their involvement in
both the beginning and the end of the life cycle provided additional
insights for error detection.
• As each process change had visible effects in reducing the errors in the
delivered product, continued process improvement was seen to be the
vehicle through which the goal of error-free software could someday
be realized. Process teams made up of participants in the processes
were created to own the processes and proactively look for process
improvements. These teams still function and are the major source for
process improvement ideas.
SQA Case Study (cont.)
• Independent SQA oversight has also played a role in the success of the PASS
FSW project. In the late 1980s, NASA customer requirements instituted an SQA
role that was detached from project management. This organization has primarily
used audit methods to ensure process compliance.
• Internal and external assessments have also positively influenced the project’s
software quality management, as well as confirmed the value of the embedded
SQA. These have included a 1984 IBM assessment by a team working under
Watts Humphrey; the assessment tool and methodology used later evolved into
the SEI CMM. A similar assessment was done in 1988 by a team from NASA, JPL
and SEI affiliates; the project was assessed at the highest maturity level using the
SEI-CMM criteria. In the 1990s, the project received ISO 9001 certification.
The project was also evaluated according to criteria of NASA’s Quality and
Excellence Award (the George M. Low Trophy), and the IBM Market Driven
Quality Award, which is based on the Malcolm Baldrige criteria. All of these
appraisals have aided PASS FSW in identifying areas for improvement and
obtaining an industry perspective.
SQA Case Study (cont.)
• The results of the focus on process improvement are demonstrated in Figure 2,
PASS FSW Product Error Rate, which shows the decrease in released errors since
1984. The error rate goes from 1.8 per thousand lines of changed source code in
1985 to zero in the latest release. The software currently in use to support shuttle
missions has no errors attributable to the latest software release.
• Process improvements continue to be made. Improvements in support tool
developments such as tools which aid in analysis of software interfaces, and tools
to provide improved testing capabilities, also contribute to significant reductions
in error rates. Embedded SQA activities in all phases of the development life
cycle, management support of quality initiatives, employees empowered to own
and improve processes, and error cause analysis have all enabled the PASS FSW
project to meet its customer requirements for delivery of error-free software.
SQA Case Study
Figure 2: PASS FSW Product Error Rate.
[Figure: PASS FSW product error rate (product errors per KSLOC, based on new/changed SLOCs) by release, OI05 through OI27. The vertical axis runs from 0.0 to 2.8 errors/KSLOC; the rate declines from roughly 1.8 in 1985 to zero in the latest release.]
Quality Attribute Design Primitives
Architecture Tradeoff Analysis
Initiative
Quality Attribute Design
Introduction
Over the last 30 years, a number of researchers and practitioners have
examined how systems achieve software quality attributes. Boehm and
the International Organization for Standardization (ISO) introduced
taxonomies of quality attributes [Boehm 78, ISO 91]. Bass, Bergey,
Klein, and others have analyzed the relationship between software
architecture and quality attributes [Bass 98, Bergey 01, Klein 99a, Klein
99b]. Various attribute communities have mathematically explored the
meaning of their particular attributes [Kleinrock 76, Lyu 96]. The
patterns community has codified recurring and useful patterns in
practice [Gamma 95]. However, no one has systematically and
completely documented the relationship between software architecture
and quality attributes.
Quality Attribute Design
Benefits
Systematically codifying the relationship between
architecture and quality attributes has several obvious
benefits:
• A greatly enhanced architectural design process – for both
generation and analysis.
• During design generation, an architect could reuse
existing analyses and determine tradeoffs explicitly
rather than on an ad hoc basis. Experienced architects do
this intuitively, but even they would benefit from codified
experience. During analysis, for example, architects could
recognize a codified structure and know its impact on
quality attributes.
Quality Attribute Design
Benefits (cont.)
• A method for manually or dynamically reconfiguring architectures to
provide specified levels of a quality attribute. Understanding the
impact of quality attributes on architectural mechanisms will enable
architects to replace one set of mechanisms for another when
necessary.
• The potential for third-party certification of components and
component frameworks. Once the relationship between architecture
and quality attributes is codified, it is possible to construct a testing
protocol that will enable third party certification.
Quality Attribute Design
Cause for Lack of Documentation
The lack of documentation stems from several causes:
• The lack of a precise definition for many quality attributes inhibits the
exploration process. Attributes such as reliability, availability, and
performance have been studied for years and have generally accepted
definitions. Other attributes, such as modifiability, security, and
usability, do not have generally accepted definitions.
• Attributes are not discrete or isolated. For example, availability is an
attribute in its own right. However, it is also a subset of security
(because a denial-of-service attack could limit availability) and usability
(because users require maximum uptime). Is portability a subset of
modifiability or an attribute in its own right? Both relationships exist in
quality attribute taxonomies.
Quality Attribute Design
Cause for Lack of Documentation (cont.)
• Attribute analysis does not lend itself to standardization. There are
hundreds of patterns at different levels of granularity, with different
relationships among them. As a result, it is difficult to decide which
situations or patterns to analyze for what quality, much less to
categorize and store that information for reuse.
• Analysis techniques are specific to a particular attribute. Therefore, it
is difficult to understand how the various attribute-specific analyses
interact.
Quality Attribute Design
Background
A widely held premise of the software architecture
community is that architecture determines quality
attributes. This raises several questions:
• Is this premise true and why do we believe that quality
attribute behavior and architecture are so intrinsically
related?
• Can we pinpoint key architectural decisions that affect
specific quality attributes?
• Can we understand how architectural decisions serve as
focal points for tradeoffs between several quality
attributes?
Quality Attribute Design
Relationships of Architecture to Quality
In essence, software architectural design, like any other design
process, consists of making decisions. These decisions involve:
• Separating one set of responsibilities from another
• Duplicating responsibilities
• Allocating responsibilities to processors
• Determining how discrete elements of responsibilities cooperate and
coordinate
Quality Attribute Design
Relationships of Architecture to Quality (cont.)
By examining these decisions and their results, we can clarify the
relationship between quality attributes and software architecture
and draw two conclusions:
1. At this level of design, much can be said about many quality attributes.
2. Very little, if anything, must be known about functionality in order to
draw quality attribute conclusions.
Quality Attribute Design
Quality Attributes and General Scenarios
Attribute communities have a natural tendency to expand. In
practice, however, we do not care whether portability is an aspect of
modifiability or whether reliability is an aspect of security.
Rather, we want to articulate what it means to achieve an
attribute by identifying the yardsticks by which it is
measured or observed. To do this, we introduce the concept
of a "general scenario." Each general scenario consists of:
• the stimuli that require the architecture to respond
• the type of system elements involved in the response
• the measures used to characterize the architecture's
response
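The three parts of a general scenario can be captured in a small record type. A minimal sketch (the class and field names are ours, for illustration only), instantiating a performance scenario as an example:

```python
from dataclasses import dataclass

@dataclass
class GeneralScenario:
    attribute: str         # quality attribute the scenario characterizes
    stimulus: str          # what requires the architecture to respond
    element: str           # type of system element involved in the response
    response_measure: str  # how the architecture's response is characterized

performance = GeneralScenario(
    attribute="performance",
    stimulus="periodic messages arrive at the system",
    element="the system",
    response_measure="average latency within a specified time interval",
)
print(performance.attribute, "-", performance.response_measure)
```

A catalog of such records is what "filtering the universe of general scenarios" would operate over when deriving attribute-specific requirements for a particular system.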
Quality Attribute Design
Quality Attributes and General Scenarios (examples)
For example:
• A performance general scenario is: “Periodic messages
arrive at a system. On the average, the system must respond
to the message within a specified time interval.” This
general scenario describes “events arriving” and being
serviced with some “latency.” A general scenario for
modifiability states “changes arriving” and the
“propagation of the change through the system.”
• A security general scenario combines “threats” with a
response of “repulsion, detection, or recovery.” A usability
general scenario ties “the user” together with a response of
“the system performs some action.”
Quality Attribute Design
General Scenarios
• If an architect or community wants to characterize a particular attribute,
a general scenario can be developed to describe it. This addresses the
first two problems that we claimed inhibited codifying the relationship
between architecture and quality attributes; that is, attribute definitions
and their relationships.
• General scenarios can apply to any system. For example, we can
discuss a change to system platform without any knowledge of the
system, because every system has a platform. We also can discuss a
change to the representation of a data producer (in a producer/consumer
relationship) without knowing anything about the type of data or the
functions carried out by the producer. This is not the same as saying
that every general scenario applies to every system.
Quality Attribute Design
General Scenarios (example)
For example:
• “The user desires to cancel an operation” is one usability general
scenario stimulus. This is not the case for every system. For some
systems, cancellation is not appropriate.
• Thus, when applying general scenarios to a particular system, the first
step is filtering the universe of general scenarios to determine those of
relevance. This is essentially the process of determining attribute-
specific requirements. Controlling (or at least significantly affecting) at
least one measure of an attribute is the task of an architectural
mechanism.
Quality Attribute Design
Architectural Mechanisms
Earlier we stated that architectural mechanisms directly contribute to
quality attributes. Examples of architectural mechanisms are the
data router, caching, and fixed priority scheduling. These
mechanisms help achieve specific quality attribute goals as defined by
the general scenarios.
Examples:
• The data router protects producers from additions and changes to consumers
and vice versa by limiting the knowledge that producers and consumers have of
each other. This affects or contributes to modifiability.
• Caching reduces response time by providing a copy of the data close to the
function that needs it. This contributes to performance.
• Fixed priority scheduling interleaves multiple tasks to control response time.
This contributes to performance.
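Caching as a performance mechanism can be shown in a few lines: keeping a local copy of previously computed results avoids repeating the expensive work. A minimal sketch (the "expensive" lookup is a stand-in, not from the source):

```python
call_count = 0

def expensive_lookup(key: str) -> str:
    """Stand-in for slow work (e.g., a remote fetch or heavy computation)."""
    global call_count
    call_count += 1
    return key.upper()

cache: dict[str, str] = {}

def cached_lookup(key: str) -> str:
    """Serve from the local copy when possible; fall back to the source."""
    if key not in cache:
        cache[key] = expensive_lookup(key)
    return cache[key]

# Three requests for the same key trigger only one expensive call.
for _ in range(3):
    cached_lookup("sensor-7")
print(call_count)  # 1
```

The reduced response time comes at the cost of extra storage and possible staleness, which is exactly the kind of attribute tradeoff the mechanism analysis is meant to expose.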
Quality Attribute Design
Architectural Mechanisms (cont.)
• Each mechanism is targeted at one or more quality
attributes. Also, each mechanism implements a more
fundamental strategy to achieve the quality attribute. For
example:
– The data router uses indirection to achieve modifiability.
– Caching uses replication to achieve performance.
– Fixed priority scheduling uses pre-emptive scheduling to achieve
performance.
Quality Attribute Design
Architectural Mechanisms (cont.)
• We can also extend the list of mechanisms and
general scenarios to address a wide range of
situations.
– If we introduce a new general scenario, we may need
to introduce new mechanisms that contribute to it.
– We may discover new mechanisms for dealing with
existing general scenarios.
• In either case, we can describe the new
mechanism(s) without affecting any other
mechanism.
51
Quality Attribute Design
Levels of Abstraction for Mechanisms
Mechanisms exist in a hierarchy of abstraction levels. For example:
– Separation is a mechanism that divides two coherent entities. This
represents a high level of abstraction.
– A more concrete mechanism, and lower level of abstraction, would
specify entities, such as separating data from function, function from
function, or data from data.
• In this case, each level has meaning and utility with respect to
modifiability.
• Increasing detail increases the precision of analysis. However, because
the goal is establishing design primitives, the highest level of
abstraction that still supports some reasoning should be maintained.
52
Quality Attribute Design
Levels of Abstraction for Mechanisms (cont.)
What qualifies as an analysis?
• By definition, an architectural mechanism helps
achieve quality attribute goals.
– There must be a reason why the mechanism supports
those goals. Articulating the reason becomes our
analysis. If we cannot state why a particular
mechanism supports a quality attribute goal, then it
does not qualify as a mechanism.
– Choosing the correct level of granularity allows us to
examine a property at its most fundamental level.
53
Quality Attribute Design
Desired Side Effects and Attributes
Mechanisms have two consequences.
• They help general scenarios achieve attributes, and they can
influence other general scenarios as side effects. These two
consequences drive two types of analysis.
– The first analysis will explain how the architectural mechanism
helps the intended general scenario achieve its result.
– The second analysis will describe how to refine the general
scenario in light of the information provided by the mechanism.
This analysis reveals side effects that the mechanism has on the
other general scenarios. By referring to the specific mechanism
that directly affects an attribute, a more detailed analysis of the
side effect can be performed.
54
Quality Attribute Design
Desired Side Effects and Attributes (cont.)
Use encapsulation as our example. We will analyze the modifiability
general scenarios that reflect changes in function, environment, or
platform.
• As long as changes occur inside the encapsulated function (that is,
hidden by an interface), the effect of these changes will be limited. If
the expected changes cross the encapsulation boundary, the effect of
the changes is not limited.
– The analysis/refinement for performance general scenarios seeks to
determine whether encapsulation significantly affects resource usage of
the encapsulated component.
– The analysis/refinement for reliability/availability general scenarios asks
whether encapsulation significantly affects the probability of failure of
the encapsulated component. A more detailed description of the
performance or reliability analyses would be found in appropriate
performance or reliability mechanism write-ups.
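The encapsulation example above can be made concrete with a minimal Python sketch (the `SpeedSensor` class, its method names, and the units are illustrative, not from the course material): changes hidden behind the interface stay local, while a change that crosses the interface boundary would ripple to every caller.

```python
# Encapsulation sketch: callers depend only on the interface (get_speed),
# so changes hidden behind it (internal units, storage format) do not
# propagate. A change crossing the boundary (e.g., renaming get_speed)
# would affect every caller.

class SpeedSensor:
    def __init__(self):
        self._raw_kmh = 100.0          # internal representation: km/h

    def get_speed(self):
        """Interface: always returns meters per second."""
        return self._raw_kmh / 3.6     # the conversion is an internal detail

sensor = SpeedSensor()
print(round(sensor.get_speed(), 2))    # callers never see km/h
```

Switching the internal representation to, say, knots would change only the body of `get_speed`; no caller would need modification.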
55
Quality Attribute Design
Desired Side Effects and Attributes (cont.)
• Table 1 shows the relationship between mechanisms and
general scenarios. If mechanisms are across the top, and
general scenarios are down the left side, then each cell
has one of three possibilities:
1. The mechanism contributes to this general scenario.
2. The mechanism has no effect on the general scenario.
3. The mechanism introduces side effects—questions to be
answered by other mechanisms.
56
Quality Attribute Design
Desired Side Effects and Attributes (cont.)
57
Quality Attribute Design
Work Product Templates
So far, we have described two concepts in this section:
1. General scenarios that characterize quality attributes
2. Architectural mechanisms
• This section presents the templates used to document attribute
stories and mechanisms, along with the data that each template
section should contain.
• To understand how architectural mechanisms achieve quality
goals, an understanding of the quality attributes themselves
must first be obtained. An attribute story helps us do just that. It
creates operational definitions for quality attributes.
58
Quality Attribute Design
Work Product Templates (cont.)
• An attribute story presents the issues associated
with an attribute and the mechanisms that could
help achieve it. It is assumed that the architect
already knows the material in the attribute story.
The story simply acts as a checklist for the
designer. Parts include:
59
Quality Attribute Design
Work Product Templates (cont.)
1. Concepts of the Attribute
– This section describes the attribute including relevant
stimuli and typical response measures.
2. General Attribute Scenarios
– This section presents a number of system-independent
or general scenarios. General scenarios describe the
attributes. These high-level scenarios could apply to
any system. Depending on the general scenario, the
system can be characterized by mechanisms, modules,
or descriptions of the hardware platform.
60
Quality Attribute Design
Work Product Templates (cont.)
3. Strategies for Achieving Attribute
– This section describes the basic strategies used to
achieve the general scenarios. This sets the foundation
for the next section of the template.
4. Enumeration of Mechanisms for Achieving
Attribute.
– This section lists the names and describes the
mechanisms used to achieve the attribute. Each
mechanism will have its own description. The brief
descriptions here serve as a checklist for architects.
61
Quality Attribute Design
Mechanism Templates
• A mechanism template provides information the
architect needs to reason about a specific
mechanism for the five attributes under
discussion.
62
Quality Attribute Design
Mechanism Templates (cont.)
1. Description of the Mechanism
– This section describes a particular mechanism.
2. Analysis of the Mechanism
– This section describes the strategy that the mechanism
used to achieve attribute goals. It also presents an
argument that the mechanism does, in fact, support
one or more general scenarios. The following sections
apply to the attributes whose properties exist as side
effects to the mechanism.
63
Quality Attribute Design
Mechanism Templates (cont.)
3. Modifiability General Scenarios
– This section interprets modifiability general scenarios
through the prism of the mechanism.
4. Performance General Scenarios
– This section interprets performance general scenarios
through the prism of the mechanism.
5. Reliability/Availability General Scenarios
– This section interprets reliability/availability general
scenarios through the prism of the mechanism.
64
Quality Attribute Design
Mechanism Templates (cont.)
6. Security General Scenarios
– This section interprets security general scenarios
through the prism of the mechanism.
7. Usability General Scenarios
– This section interprets usability general scenarios
through the prism of the mechanism.
65
Quality Attribute Design
Mechanisms
• Data Router
• Fixed Priority Scheduling
66
Quality Attribute Design
Mechanisms (cont.)
Data Router
• The data router (also called a data bus or a data distributor)
mechanism implements the strategy of data indirection. It
inserts a mediator between a data item producer and
consumer. With the mediator in place, the only knowledge embedded in
the data producer is that it must send the data item to the
mediator. The mechanism that the data router uses to
recognize consumers for a data item is discussed separately
(see data registration). The following figure describes this
mechanism.
67
Quality Attribute Design
Mechanisms (cont.)
Data Router
68
Quality Attribute Design
Mechanisms (cont.)
Data Router (cont.)
• Producers are components that generate messages. Consumers are
components that receive messages. It is possible for a component
to both produce and consume different types of messages. The
router directs messages of particular type to consumers of that type.
We do not show the registration mechanism that the data router
uses to identify producers and consumers.
• Sometimes the data router also converts data. For example,
suppose the data producer of a speed measurement supplies it in
kilometers per hour and the consumer wishes to receive it in miles
per hour. To perform the conversion, the data router must know the
units in which the data item is produced and the units that the
consumer desires.
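As an illustrative sketch (the class and method names are invented for this example, not part of the course material), a minimal data router in Python can route messages by type and apply the unit conversion just described:

```python
# Minimal data-router sketch: producers and consumers know only the router,
# which directs each message to the consumers registered for its type and
# applies an optional conversion first. (Registration appears here only
# because the router needs it internally; the course treats it separately.)

class DataRouter:
    def __init__(self):
        self._consumers = {}    # data type -> list of consumer callbacks
        self._converters = {}   # data type -> conversion function

    def register_consumer(self, data_type, callback):
        self._consumers.setdefault(data_type, []).append(callback)

    def register_converter(self, data_type, convert):
        self._converters[data_type] = convert

    def publish(self, data_type, value):
        # Convert if a converter is installed for this type.
        convert = self._converters.get(data_type, lambda v: v)
        for callback in self._consumers.get(data_type, []):
            callback(convert(value))

# Example: a speed producer supplies km/h; the consumer wants mph.
router = DataRouter()
received = []
router.register_consumer("speed", received.append)
router.register_converter("speed", lambda kmh: kmh / 1.609344)
router.publish("speed", 100.0)   # producer side: km/h
print(round(received[0], 1))     # consumer side: ~62.1 mph
```

Note that adding a second consumer of "speed" requires only another `register_consumer` call; neither the producer nor the existing consumer changes, which is the modifiability benefit the mechanism claims.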
69
Quality Attribute Design
Mechanisms (cont.)
Purpose of this Mechanism
• The data router mechanism reduces the coupling between the data
producer and consumer. This simplifies the process of adding new
consumers or producers of a single type of data.
• The stimulus in the following general scenarios changes function,
platform, or environment:
• Adding/removing/modifying a producer/consumer of a data item within a
known data type. Adding or removing a producer/consumer of a data item
should not affect existing producers or consumers, since producers and
consumers do not know each other. The new consumer or producer must be registered to the data
router. (Registration is outside the scope of this mechanism.) If the producer
being removed is the only element that supplies that data type, then
consumers will not receive any data. Presumably, that is the intent of the
modification.
70
Quality Attribute Design
Mechanisms (cont.)
Purpose of this Mechanism (cont.)
• Adding a new data type to a producer of a data item. Adding a new
producer data type should not affect any existing consumers since
they can remain ignorant of the new data and its producers. The
mediator may need to be modified if it has to transform the new
data type. Likewise, no existing producers are affected by
adding a new producer.
• Adding a new conversion type to the data router. The data router
needs to be modified to perform the new conversion. Existing
conversions and producers and consumers are not modified.
• Modifying the data router. The effect of modifying the data router
depends, obviously, on the type of modification. Adding a new data
type should not affect existing data types. Modifying the interfaces
of the data router will require all producers and consumers to be
modified.
71
Quality Attribute Design
Mechanisms (cont.)
Performance General Scenario:
• What is the mean/largest/minimum size (bytes)/distribution of data items?
• Can multiple messages arrive at the mediator simultaneously?
• What is the maximum/mean number of consumers for each data item?
• What is the delay introduced by the mediator for the consumer of a data item?
• Are messages produced periodically (at what period) or episodically (at what rate
and distribution)?
• Are the producers and consumers in the same process or in different processes?
• If the producers and consumers are in different processes (or on different
processors) what is the latency of message delivery? What are the resources
involved in message delivery?
• How are producers and consumers synchronized?
72
Quality Attribute Design
Mechanisms (cont.)
Security General Scenario:
• Can eavesdroppers listen to the transmission of any data items to/from
the data router?
• Is it possible for unauthorized producers to send messages to the mediator
or for unauthorized consumers to receive messages from the mediator?
Availability/Reliability General Scenario:
• What is the likelihood/impact if a produced message does not arrive?
• What is the likelihood/impact of a message not being produced in time?
73
Quality Attribute Design
Mechanisms (cont.)
Usability General Scenario:
• What data items sent to the mediator should be visible to the user?
• What feedback should the user receive about the messages sent
to/from the mediator?
• What needs to be recorded in order to cancel, undo, or to evaluate
the system for usability?
74
Quality Attribute Design
Mechanisms (cont.)
Fixed Priority Scheduling
• Process scheduling is a common mechanism provided by most operating
systems. It automatically interleaves execution of units of concurrency
(such as processes and/or threads). Fixed priority scheduling is a
uniprocessor scheduling policy that assigns a fixed priority to each process/thread in the
system. The scheduler uses this priority to decide which process to
execute. It assigns the processor to the highest priority process/thread
that is not currently blocked.
• The following figures present a module view of a fixed priority
scheduler and a sequence diagram of a fixed priority scheduler with two
tasks. The high-priority task executes until it blocks. The next priority
task then executes until the high-priority task is ready to execute again.
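The dispatch rule described above (run the highest-priority task that is not blocked) can be sketched as follows; this is a toy illustration of the selection rule, not an operating-system scheduler, and the task names and priorities are invented:

```python
# Tiny fixed-priority dispatch sketch: at each step the scheduler runs the
# highest-priority task that is not blocked, mirroring the two-task
# sequence described in the slides.

def pick_next(tasks):
    """tasks: list of dicts with 'name', 'priority' (higher = more urgent),
    and 'blocked'. Returns the task to run, or None if all are blocked."""
    ready = [t for t in tasks if not t["blocked"]]
    return max(ready, key=lambda t: t["priority"]) if ready else None

tasks = [
    {"name": "high", "priority": 10, "blocked": False},
    {"name": "low",  "priority": 1,  "blocked": False},
]

print(pick_next(tasks)["name"])   # "high": ready and highest priority
tasks[0]["blocked"] = True        # the high-priority task blocks (e.g., I/O)
print(pick_next(tasks)["name"])   # "low": runs only while "high" is blocked
tasks[0]["blocked"] = False       # "high" becomes ready again and preempts
print(pick_next(tasks)["name"])   # "high"
```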
75
Quality Attribute Design
Mechanisms (cont.)
Fixed Priority Scheduling (cont.)
76
Quality Attribute Design
Mechanisms (cont.)
Fixed Priority Scheduling (cont.)
77
Quality Attribute Design
Mechanisms (cont.)
Purpose of this Mechanism
• The purpose of any scheduling mechanism is two-fold:
1. separate the management of concurrency and the concomitant performance (i.e.,
timing) concerns from the concerns of implementing functionality. This
allows the architect to design a structure that reflects the “natural
concurrency” of the system’s environment (such as handling multiple
message types, or sensor types that have disparate timing properties).
2. allocate a single processor to event stimuli in the manner required by their
timing requirements.
• Fixed priority scheduling allows for a level of predictability that
corresponds to the timing estimates associated with the various
processing scenarios. This predictability is due to scheduling analysis
techniques (such as rate monotonic analysis).
79
Quality Attribute Design
Mechanisms (cont.)
Purpose of this Mechanism (cont.)
• The set of general scenarios addressed by this mechanism are:
<Events arrive periodically or sporadically> and must be completed <in the
worst-case> <within a time interval and/or subjected to data loss
constraints> <while executing on a uniprocessor>.
• We will make two assumptions for this analysis:
1. The dominant character of the processes is deterministic.
2. The only effects of concern for this mechanism derive from allocating the
processor to processes. (The effects of processes sharing data, acquiring
memory, or using hardware devices are not considered here.)
80
Quality Attribute Design
Mechanisms (cont.)
Purpose of this Mechanism (cont.)
• Simplest case:
• In this simple case, the processes are the only sources of execution time, the processes are
pre-emptible by higher priority processes, and no pipelines exist in the topology.
• As you can see, only two factors affect the latency of a process: (1) its own execution time
and (2) preemption due to higher priority processes. Latency can be computed by
determining the fixed point of the following expression, where C_i is the execution time of
process i (the process whose latency is being computed) and the summation term computes
the effects of preemption by each higher priority process j, with execution time C_j and
period T_j:

L_i = C_i + Σ_{j ∈ hp(i)} ⌈L_i / T_j⌉ · C_j
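The fixed-point latency computation described above can be sketched in Python. This is the standard response-time iteration used in rate monotonic analysis; the process parameters in the example are illustrative, not from the course material:

```python
import math

# Fixed-point latency computation for the simplest case: preemptible
# processes on one processor, no other sources of execution time.
# L_i = C_i + sum over higher-priority j of ceil(L_i / T_j) * C_j,
# iterated until the candidate latency stops changing.

def latency(C_i, higher_priority):
    """C_i: execution time of the process under analysis.
    higher_priority: list of (C_j, T_j) pairs for the execution time and
    period of each higher-priority process."""
    L = C_i
    while True:
        next_L = C_i + sum(math.ceil(L / T_j) * C_j
                           for C_j, T_j in higher_priority)
        if next_L == L:
            return L          # fixed point reached
        L = next_L

# Example: a process with C=4, preempted by (C=1, T=4) and (C=2, T=10).
print(latency(4, [(1, 4), (2, 10)]))   # fixed point: 8
```

The iteration converges as long as total processor utilization stays below 100%; in a full analysis one would also check that the resulting latency does not exceed the process's deadline.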
81
Quality Attribute Design
Mechanisms (cont.)
Performance Considerations
• The following questions are specifically relevant to fixed
priority scheduling:
1. What is the execution time of processes? Is the maximum known?
2. What are other sources (other than the processes) of execution time
(such as context switch, dispatching, or daemons for garbage
collection)?
3. How many priority levels does the scheduler support?
4. What are the sources of interrupts or non-pre-emptible sections?
82
Quality Attribute Design
Mechanisms (cont.)
Performance Considerations (cont.)
5. How often do the processes execute? Is there a minimum amount of
time between executions?
6. What is the flow of control/data between processes?
7. Are there various modes in which the above characteristics change?
8. What is the stochastic character of processing frequency and execution
time?
83
Quality Attribute Design
Mechanisms (cont.)
Modifiability Considerations
• In general, any of the performance parameters mentioned in the
preceding section might change. Therefore, the following
questions could be asked:
1. What is the impact of adding or deleting new processes?
2. What is the impact of changing process priorities?
3. What is the impact of changing process execution times?
4. What is the impact of changing event arrival rates?
5. What is the impact of switching operating systems?
84
Quality Attribute Design
Mechanisms (cont.)
Reliability Considerations
1. How can errors propagate between processes?
2. Do processes run in their own address space?
3. What happens if processes exceed their so-called worst-case
execution estimates?
85
Quality Attribute Design
Mechanisms (cont.)
Security Considerations
1. Can processes be killed and then spoofed?
2. Can new processes be created (with the intent of causing denial
of service)?
86
Quality Attribute Design
Mechanisms (cont.)
Usability Considerations
1. How is average response time managed?
2. Do any of the processes generate data from which the user
gets feedback?
3. Is consideration of this feedback data included in the setting
of deadlines?
87
Quality Attribute Design
Attribute Stories
• The Modifiability Story
• The Performance Story
88
Quality Attribute Design
Modifiability Story
Concepts of Modifiability Story
• Modifiability is the ability of a system to be changed
after it has been deployed. In this case, the stimulus is
the arrival of a change request and the response is the
time necessary to implement the change. Requests can
reflect changes in functions, platform, or operating
environment. A request could also modify how the
system achieves its quality attributes. These changes
can involve the source code or data files. They can be
made at compile time or run time.
89
Quality Attribute Design
Modifiability Story (cont.)
• A request arrives to change the functionality of the system. The change
can be to add new functionality, to modify existing functionality, or to
delete a particular functionality.
• The hardware platform is changed. The system must be modified to continue to
provide current functionality. There may be a change in hardware including input and
output hardware, operating system, or COTS middleware.
• The operating environment is changed. For example, the system now has to work with
systems previously not considered. It may have to operate in a disconnected mode. It
may have to dynamically discover what devices are available, or it may have to react
to changes in the number or characteristics of users.
• A request arrives to improve a particular quality attribute such as reliability,
performance, or usability. The system should be modified to achieve better usability,
for instance.
• A new distributed user arrives and wishes to utilize just some of the services available.
• As a result of quality considerations, some portion of the system modifies its behavior
during run time.
90
Quality Attribute Design
Modifiability Story (cont.)
The general form for modifiability scenarios is:
Time of modification ::= Compile time | run time
Type of modification ::= Add | modify | delete
Target of modification ::= Functionality | platform (hardware, OS, middleware) |
environment (interoperate, disconnect, #users) | quality attribute
91
Quality Attribute Design
Modifiability Story (cont.)
Strategies that achieve modifiability:
• At compile time, modifiability is achieved through three basic
strategies.
1. Indirection – Indirection involves inserting a mediator, such as a data router, to
allow variation within a particular area of concern. For example, when
producers and consumers of data communicate via a mediator, they typically
have no direct knowledge of each other. This means that additional producers
or consumers could be added without change to the consumers or producers.
A virtual device or machine can also serve as a mediator in the indirection
strategy.
2. Separation – This strategy separates data and function that address different
concerns. Since the concerns are separate, we can modify one concern
independently of another. Isolating common function is another example of a
separation strategy.
92
Quality Attribute Design
Modifiability Story (cont.)
3. Encoding function into data (meta-data and language interpreters) –
By encoding some function into data and providing a mechanism
for interpreting that data, we can simplify modifications that affect
the parameters of that data.
• At run time, modifiability is achieved using a strategy of dynamic
discovery and reconfiguration. This strategy involves discovering the
resources or functions that become dynamically available as the
system’s environment changes, negotiating for access to those
resources, then accessing the resource.
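The third strategy, encoding function into data, can be illustrated with a small sketch (the rule format, names, and thresholds are hypothetical): behavior is expressed as data that an interpreter evaluates at run time, so a modification means editing data rather than code.

```python
# Sketch of the "encode function into data" strategy: limit checks that
# would otherwise be hard-coded are expressed as rule data and interpreted
# at run time, so changing a threshold requires no code modification.

RULES = [  # (parameter name, comparison, threshold, message)
    ("temperature", ">", 90.0, "overheating"),
    ("pressure",    "<", 10.0, "pressure too low"),
]

def evaluate(rules, readings):
    """Interpret the rule data against a dict of sensor readings."""
    ops = {">": lambda a, b: a > b, "<": lambda a, b: a < b}
    return [msg for name, op, limit, msg in rules
            if ops[op](readings[name], limit)]

print(evaluate(RULES, {"temperature": 95.0, "pressure": 30.0}))
# ['overheating'] -- editing RULES changes behavior without touching code
```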
93
Quality Attribute Design
Modifiability Story (cont.)
Mechanisms that achieve Modifiability
• data router (also called a data bus or a data distributor) – This
intermediary mechanism directs (and possibly transforms) data from
producers to consumers. It requires a supporting mechanism to identify
producers and consumers.
• data repository – This mechanism stores data for later use.
• virtual machine – This mechanism places an intermediary between
function users and producer(s).
94
Quality Attribute Design
Modifiability Story (cont.)
Mechanisms that achieve Modifiability (cont.)
• independence of interface from implementation – This mechanism
allows architects to substitute different implementations for the same
functionality. Interface definition languages (IDLs) are an example of
this mechanism.
• interpreter – This mechanism involves encoding function into
parameters and more abstract descriptions to allow architects to easily
modify them.
• client-server – This mechanism involves providing a collection of
services from a central process and allowing other processes to use
these services through a fixed protocol.
97
Quality Attribute Design
Performance Story
Concepts of Performance
• Performance involves adjusting and allocating resources to meet system
timing requirements. The stimulus can be any event that requires a
response, such as the arrival of a message, the expiration of an interval
of time (as signaled by a timer interrupt), the detection of a significant
change in the system’s environment, a user input event, etc.
• In this situation, the system consists of a collection of processors
connected by a network, involving “schedulable entities” such as
processes, threads, and messages.
98
Quality Attribute Design
Performance Story (cont.)
Concepts of Performance
• Important response measures include
–Latency – the amount of time that passes from the event occurrence
until the completed response. Average-case latency and worst-case
latency are variants of latency measures
–Throughput – the number of events responded to per unit of time
–Precedence – preserving ordering constraints on the responses to
events
99
Quality Attribute Design
Performance Story (cont.)
Concepts of Performance (cont.)
• Jitter – variance in the time interval between response completions
• Growth capacity – ability to add load while meeting timing
requirements
At a higher level, performance concerns the ability to predictably
provide various levels of service (measured in the above terms) under
varying circumstances. Performance depends upon the load to the
system and the resources available to process the load.
100
Quality Attribute Design
Performance Story (cont.)
The general form of general performance scenarios is
• <Events arrive periodically, sporadically or randomly> and must be completed <in the
worst-case or on the average> <within a time interval and/or subjected to data loss
constraints and/or subjected to throughput constraints and/or constrained by order
relative to other event responses and/or adhering to jitter constraints>
• A sample of general performance scenarios assuming unchanging resource
requirements follows:
• Periodic or sporadic events (that is, events with bounded inter-arrival times) arrive. In
all cases, the system must execute its response to the event within a specified time
interval. (One common example is the time interval that starts when the event occurs
and ends one period later.)
• Periodic or sporadic events arrive. In all cases, no more than n/m events can be lost.
• Periodic or sporadic events arrive. On average the system must execute its response to
the event within a specified time interval.
101
Quality Attribute Design
Performance Story (cont.)
The general form of general performance scenarios is (cont.)
• Periodic or sporadic events arrive. On average no more than n/m events can be
lost.
• Events arrive stochastically. On average the system must execute its response
to the event within a specified time interval. The response, however, should
never be greater than a specified upper bound.
• Events arrive stochastically. On average the system must complete n responses
per unit time.
102
Quality Attribute Design
Performance Story (cont.)
The general form of general performance scenarios is (cont.)
• We can extend the above general performance scenario template to include
cases where the resource requirements change. These cases include
• Periodic or sporadic events arrive.
• In all cases, the system must carry out its response to the event within a
specified time interval by executing n nodes in a network, where each node
consists of m processors on a bus and each processor is executing p processes.
If a subset of the resources fails, the most critical subset of event responses
must continue to be met.
103
Quality Attribute Design
Performance Story (cont.)
The most general form for a performance scenario considers many or all of the
following aspects:
• Arrival pattern ::= periodic | sporadic | random
• Latency requirement ::= worst-case | average-case [with upper bound] |
window
• Throughput requirement ::= n completions per unit time
• Data loss requirement ::= no more than m of n can be lost
• Platform ::= uniprocessor | distributed
• Resources ::= processors, networks, buses, memory
• Mode ::= (resource driven | load driven), (discrete | continuous)
104
Quality Attribute Design
Performance Story (cont.)
Strategies that achieve performance:
• Performance is achieved by managing resources (such as CPUs, buses,
networks, memory, data, etc.) that affect system response. That is, timing
behavior is determined by allocating resources to resource demands, choosing
between conflicting requests for resources, and managing resource usage.
Therefore, there are three key techniques for managing performance:
1. Resource allocation – These are policies for realizing resource demands by
allocating various resources; for example, deciding on which of several available
processors a process will execute.
105
Quality Attribute Design
Performance Story (cont.)
Strategies that achieve performance (cont.):
2. Resource arbitration – These are policies for choosing between various
requests of a single resource; for example, deciding which of several
processes will execute on a processor. Arbitration includes preemptive
strategies and exclusive use strategies.
3. Resource usage – These are strategies for affecting resource demands; for
example, deciding how to reduce the execution time needed to respond to one
or more events, or deciding not to execute certain tasks in periods of intense
computation of high-priority tasks.
Each of these aspects affects the completion time of the response. As a result,
each aspect can be viewed as a category of mechanism.
106
Quality Attribute Design
Performance Story (cont.)
Mechanisms that Achieve Performance (cont.):
In general, performance can be achieved through resource allocation, resource
arbitration, and resource use.
Resource allocation
1. Load balancing – spreading the load evenly between a set of resources
2. Bin packing – based on measures of spare capacity (for example, schedulable
utilization), bin packing maximizes the amount of work that can be performed
by a collection of resources.
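A minimal sketch of the load-balancing allocation policy above (illustrative, assuming a simple additive load model): each work item is assigned to the currently least-loaded resource.

```python
# Least-loaded assignment sketch for the "load balancing" allocation
# policy: each incoming work item goes to the resource with the smallest
# current load.

def assign(loads, cost):
    """loads: list of current loads per resource. Adds cost to the least
    loaded resource and returns that resource's index."""
    target = loads.index(min(loads))
    loads[target] += cost
    return target

loads = [0.0, 0.0, 0.0]
placements = [assign(loads, c) for c in [5.0, 3.0, 2.0, 4.0]]
print(placements)   # [0, 1, 2, 2]: each item went to the least loaded
print(loads)        # [5.0, 3.0, 6.0]
```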
Resource arbitration (Preemptive)
1. Dynamic priority scheduling – allows the priority of a process to vary from one
invocation to the next (e.g., earliest deadline first scheduling)
107
Quality Attribute Design
Performance Story (cont.)
Mechanisms that Achieve Performance (cont.):
2. Fixed priority scheduling – the priorities of processes are fixed from one invocation to the
next (e.g., rate monotonic analysis or deadline monotonic)
3. Aperiodic service strategies – mechanisms for scheduling aperiodic processes in a
periodic context (e.g., slack stealing)
4. Static scheduling – timelines which are calculated prior to runtime (e.g., cyclic
executive)
5. Queue overwrite policy – mechanism for handling overflow in a bounded queue (e.g.,
overwrite oldest)
6. QoS-based methods – methods that explicitly consider quality of service in scheduling
decisions (e.g., QRAM)
7. Overload management/load shedding – mechanisms that determine which processes
receive resources when the system cannot service all requests
108
Quality Attribute Design
Performance Story (cont.)
Mechanisms that Achieve Performance (cont.):
(Exclusive use)
8. Synchronization policies – mechanisms for managing mutually exclusive
access to shared resources (e.g., priority inheritance)
Resource use
Caching – using a local copy of data to reduce access time
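A minimal caching sketch for the resource-use category (illustrative; no invalidation policy is shown, so it assumes the underlying data never changes):

```python
# Caching sketch: keep a local copy of a computed (or remotely fetched)
# value so that repeated requests avoid the expensive access.

calls = {"count": 0}

def expensive_lookup(key):
    calls["count"] += 1           # stands in for a slow disk/network access
    return key.upper()

cache = {}

def cached_lookup(key):
    if key not in cache:
        cache[key] = expensive_lookup(key)
    return cache[key]

print(cached_lookup("speed"))     # first access: hits the expensive lookup
print(cached_lookup("speed"))     # second access: served from the cache
print(calls["count"])             # 1 -- the expensive lookup ran only once
```

A production cache would also bound its size and handle staleness; those policies trade memory and consistency against the response-time gain described above.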
109
Software Quality Assurance
Defect Prevention
110
Defect Prevention
• Defect prevention comprises the activities
required to isolate and correct software design
and implementation issues that could lead to
defects requiring rework to make the software
operational at the specified quality level.
111
Defect Prevention
• Formal defect prevention activities should
be embedded in the software life-cycle
process. This empowers the development
team and the SQA organization to analyze
the causes of defects and to ensure that
corrective process improvements are
implemented, both to prevent future defect
insertion and to enhance the detection
process.
112
Defect Prevention
• Defect prevention directly relates to the quality of
the development process.
• The degree of prevention is dependent on the
degree of process improvement accomplished
throughout the development.
• In general, it is found that 34% of a project’s effort
is spent designing software, 12% on coding, and
10% on testing. The remaining 44% is spent
fixing problems that could have been avoided with
better upfront design.
113
Defect Prevention
Distribution of project effort by phase:
Requirements: 6%
Preliminary Design: 12%
Detailed Design: 16%
Code and Unit Test: 12%
Integration and System Test: 10%
REWORK: 44%
114
Defect Prevention
Development cost vs. rework cost per phase:

Phase                          Development cost   Rework cost
Requirements                   6%                 1%
Preliminary Design             12%                4%
Detailed Design                16%                8%
Code and Unit Test             12%                12%
Integration and System Test    10%                19%

Total rework: 44% of project cost
115
Defect Prevention
• Defects happen! If we cannot build defect-free software the first time,
then the next best way to build good software is to learn from our
mistakes and build it right the next time.
• Defect causal analysis is the best method for improving software
quality from the SQA perspective. It is an orderly technique for
identifying causes of problems and preventing defects resulting from
them. Causal analysis focuses on both defect discovery and on what
caused the defect to occur.
• The importance of this SQA approach is that it provides the necessary
feedback for the improvement of tool use, software engineering
training and education, and ultimately, the development process itself.
116
Defect Prevention
• Perhaps the most important attribute of software quality
assurance is defect causal analysis. Quality software
demands that defects be eliminated and prevented, not only
from all products delivered to users, but also from all the
processes that create those products.
• Finding and correcting mistakes makes up an inordinately
large portion of total software development cost. As we
have seen the level of effort for rework to correct defects is
typically 40% to 50% of total software development.
118
Defect Prevention
• The following figure illustrates the causal analysis process:
– At the start of each development phase, an entrance conference is
held to review deliverables from the previous phase, to review
process methodology guidelines and standards, and to set the team
quality goals. These exercises start the defect prevention process,
since they concentrate the team’s attention on the process-oriented
development details soon to be performed.
– Throughout each phase, audits, reviews, and formal peer
inspections are scheduled, in addition to walkthroughs by
corporate management. During these reviews, each work-product
is checked for defects and rework is performed (as required) with
follow-up reviews.
119
Defect Prevention
120
Defect Prevention
• The most important activity during this process is the preliminary
causal analysis of the defects themselves.
• During causal analysis meetings, individual problems are isolated and
analyzed (typically between 1 and 20 defects are scrutinized per meeting).
• Each defect item is then added to a database where it is tracked by its
description, how it was resolved, and the preliminary analysis of its
cause.
• Another meeting is held at the end of the development phase which
consists of a brainstorming session. The purpose of this meeting is to
analyze defect causes, evaluate results versus goals set at the start of
the phase, and to develop a list of suggested process improvements.
121
Defect Prevention
• The SQA team (e.g., people from the tools, training, development, support,
and testing groups) must then respond to these suggestions. The process
action team is responsible for:
– Action item prioritization,
– Action item status tracking,
– Action item implementation,
– Dissemination of feedback,
– Causal analysis database administration,
– Analysis of defects for generic classification,
– Success story visibility.
122
Defect Prevention
• In a sample program a causal analysis tool is used to
predict the total number of defects in core software
modules and the detection rate for finding those problems.
• Use of the model has greatly reduced software defect
insertion over the span of the software project. The figure
illustrates how the defect insertion rate progressively
decreased by one third between production Blocks 30 and 40,
and again between Blocks 40 and 50.
123
Defect Prevention
Block 30B software defect density: 9.8 defects per thousand delivered source instructions (KDSI)
Block 40B software defect density: 1/3 reduction in defect insertion rate from Block 30B
Block 50B software defect density: 1/3 reduction in defect insertion rate from Block 40B
124
Defect Prevention
Defect Removal Efficiency
• An anomaly about defect detection in large software-intensive systems was
identified in studies performed by major corporations: the defects they found were
not randomly distributed throughout software applications.
• Defects clump into localized sections, called “defect-prone modules.” As
a rule, about 5% of the modules in a large software system will contain
almost 50% of the reported defects.
• Once those modules are identified, the defects are usually removable. High
complexity and poor coding practices are often the cause of defect concentrations in
defect-prone modules.
• One useful product of defect measurement is called “defect removal efficiency.”
This cumulative measure is defined as the ratio of defects found prior to delivery of
the software system to the total number of defects found throughout development.
• [JONES91] If modules are kept small (e.g., 100 SLOC), it is often cheaper and
faster to simply rewrite the module rather than search for and remove its defects.
125
Defect Prevention
Defect Removal Efficiency (cont.)
• A product of defect measurement is called “defect removal
efficiency.” This cumulative measure is defined as the ratio of defects
found prior to delivery of the software system to the total number of
defects found throughout development
• It can be seen that small modules are often cheaper and faster to simply
rewrite than to search for and remove their defects.
Defect Removal Efficiency = (Defects Found Prior To Delivery) / (Total Defects Found)
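The ratio above can be computed directly from project defect counts; a minimal sketch (function name and counts are illustrative):

```python
def defect_removal_efficiency(found_before_delivery: int, total_found: int) -> float:
    """Cumulative ratio of defects found prior to delivery
    to the total number of defects found throughout development."""
    if total_found == 0:
        raise ValueError("no defects recorded")
    return found_before_delivery / total_found

# Example: 92 of 100 known defects were caught before delivery.
print(defect_removal_efficiency(92, 100))  # 0.92
```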
126
Defect Prevention
Defect Removal Efficiency (cont.)
• The defect removal efficiency indicator gives the cumulative percent of
how many previously injected defects have been removed by the end
of each development phase.
• Since the cost of defect removal roughly doubles with each phase,
early removal must be a process improvement priority. With this goal,
each newly delivered software product can be expected to be of
significantly higher quality than its predecessor.
• Increases in defect removal efficiency can be accomplished through
improving the code inspection and unit testing processes.
127
Defect Prevention
If the defect cannot be detected and does not show itself as
output, why bother removing it?
• With mission- and safety-critical software operating under maximally
stressed conditions, the chances of a latent defect-related software failure
often increase beyond acceptable limits.
• This dichotomy amplifies the need to detect, remove, and ultimately
prevent the causes of errors before they become elusive software defects.
• Latent, undetected defects have the tendency to crop up when the software
is stressed beyond the scope of its developmental testing. It is at these
times, when the software is strained to its maximum performance, that
defects are the most costly or even fatal.
128
Defect Prevention
• For systems where failure to remove defects before delivery
can have catastrophic consequences in terms of failed missions
or the loss of human life, defect removal techniques must be
decidedly intense and aggressive.
• For instance, because the lives of astronauts depend implicitly
on the reliability of Space Shuttle software, the software
defect removal process employed on this program has had a
near perfect record. Of the 500,000 lines-of-code for each of
the six shuttles delivered, there was a zero-defect rate of the
mission-unique data tailored for each shuttle mission, and the
source code software had 0.11 defects per KLOC.
129
Defect Prevention
Space Shuttle Defect Removal Process Improvement
[Diagram: a product flows through process elements A through D. A root
cause (2) in an early element introduces the original defect (1); the defect
escapes detection (3) in subsequent elements, leaving similar additional
undetected defects (4) in the product.]

Steps performed for every defect (regardless of magnitude):
(1) Remove the defect
(2) Remove the root cause of the defect
(3) Eliminate the process escape deficiency
(4) Search/analyze the product for other, similar escapes
130
Defect Prevention
• Studies show that 31.2% of the defects found
during system testing are inserted during the
requirements analysis and design phases, 56.7%
are inserted during coding, and 12.1% are inserted
during unit testing and integration.
• The number of defects at the completion of unit
testing, and the number of defects delivered to the
field, are well documented industry averages.
131
Defect Prevention
Defect profile, with and without inspections (defects per KLOC inserted per
phase; industry-average values in parentheses):

Phase               Industry Average   With Inspections
Requirements        20                 5 (20)
Design              40                 8 (40)
Code                100                15 (100)
Unit Test           20                 3 (20)
Integration Test    50                 7 (50)
System Test         10                 1 (10)

In the inspection profile, inspections are applied at the requirements,
design, and code phases, reducing the fielded defects per KLOC.
60% of software defects can be traced back to the design process.
132
Defect Prevention
• In addition to the gains in quality, inspections produce
corresponding gains in productivity as the amount of
rework needed to fix defects is greatly reduced.
[Figure: people resources plotted against schedule (Planning, Requirements,
Design, Coding, Testing, Delivery), comparing staffing curves with and
without inspections.]
133
Defect Prevention
• What is the Generic Defect Prevention Process?
[Diagram: three activities, each supported by sponsorship, infrastructure,
oversight, and a defect prevention mechanism, and each surfacing status and
organizational issues:
1. Establish and maintain a defect prevention mechanism
2. Perform defect prevention activities (using the project’s process assets)
3. Implement organizational actions (feeding organizational learning and
process improvements back into the organization’s process assets)]
134
Software Quality Assurance
Key Software Metrics
135
Key Software Metrics
A Rigorous Framework for Software Measurement
• The software engineering literature abounds with software ‘metrics'
falling into one or more of the categories described above. So
somebody new to the area seeking a small set of 'best' software metrics
is bound to be confused, especially as the literature presents conflicting
views of what is best practice. In this section we establish a framework
relating different activities and different metrics. The framework
enables readers to distinguish the applicability (and value) of many
metrics. It also provides a simple set of guidelines for approaching any
software measurement task.
136
Key Software Metrics
Measurement is the process by which numbers or symbols are
assigned to attributes of entities in the real world in such a way as
to characterize them according to clearly defined rules. The
numerical assignment is called the measure.
• The theory of measurement provides the rigorous framework for
determining when a proposed measure really does characterize the
attribute it is supposed to.
• The theory also provides rules for determining the scale types of
measures, and hence to determine what statistical analyses are
relevant and meaningful. We make a distinction between a
measure (in the above definition) and a metric.
• A metric then is a proposed measure. Only when it really does
characterize the attribute in question can it truly be called a
measure of that attribute.
137
Key Software Metrics
• Some Key Metrics
– Complexity Metrics
– Resource estimation models
– Metrics of Functionality: Albrecht's Function Points
– Incident Count Metrics
138
Key Software Metrics
Complexity Metrics
• Prominent in the history of software metrics has been
the search for measures of complexity. Because it is a
high-level notion made up of many different attributes,
there may never be a single measure of software
complexity. There have been hundreds of proposed
complexity metrics. Most of these are also restricted to
code. The best known are Halstead's software science
and McCabe's cyclomatic number.
139
Key Software Metrics
Complexity Metrics (cont.)
Halstead’s software science metrics
A program P is a collection of tokens, classified as either
operators or operands.
n1 = Number of unique operators
n2 = Number of unique operands
N1 = Total occurrence of operators
N2 = Total occurrence of operands
Length of P: N = N1 + N2        Vocabulary of P: n = n1 + n2
140
Key Software Metrics
Complexity Metrics (cont.)
• Halstead’s software science metrics (cont.)
Theory: the estimated length of P is N̂ = n1 log2 n1 + n2 log2 n2
Theory: the effort required to generate P is E = (n1 · N2 · N · log2 n) / (2 · n2)
Theory: the time required to program P is T = E / 18 seconds
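Using the definitions above, these estimates can be computed from a program's token stream. A sketch (the tiny token lists below are illustrative, for the expression `a = b + b`):

```python
import math

def halstead(operators, operands):
    """Compute Halstead software science metrics from token occurrence lists."""
    n1, n2 = len(set(operators)), len(set(operands))   # unique operators/operands
    N1, N2 = len(operators), len(operands)             # total occurrences
    N, n = N1 + N2, n1 + n2                            # length and vocabulary of P
    N_hat = n1 * math.log2(n1) + n2 * math.log2(n2)    # estimated length
    E = (n1 * N2 * N * math.log2(n)) / (2 * n2)        # effort to generate P
    T = E / 18                                         # time in seconds
    return {"length": N, "vocabulary": n, "N_hat": N_hat, "effort": E, "time_s": T}

# Tokens of the expression: a = b + b
print(halstead(["=", "+"], ["a", "b", "b"]))
```

Here n1 = 2, n2 = 2, N1 = 2, N2 = 3, so length N = 5, vocabulary n = 4, N̂ = 4, and E = (2 · 3 · 5 · 2) / 4 = 15.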
141
Key Software Metrics
Complexity Metrics (cont.)
Computing McCabe’s cyclomatic number
142
Key Software Metrics
Resource Estimation Models
• COCOMO
Most resource estimation models assume the form:
    Effort = a × (Size)^b
Simple COCOMO model:
    Effort (person-months) = a × (KDSI)^b
(KDSI = Thousands of Delivered Source Instructions).
143
Key Software Metrics
Resource Estimation Models (cont)
COCOMO (cont.)
• The COCOMO model comes in three forms: simple, intermediate and
detailed. The simple model is intended to give only an order of magnitude
estimation at an early stage. However, the intermediate and detailed
versions differ only in that they have an additional parameter which is a
multiplicative "cost driver" determined by several system attributes.
• To use the model you have to decide what type of system you are building:
– Organic: refers to stand-alone in-house DP systems
– Embedded: refers to real-time systems or systems which are constrained in some
way so as to complicate their development
– Semi-detached: refers to systems which are "between organic and embedded"
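The simple model can be sketched directly from its equation. The coefficients below are Boehm's basic COCOMO 81 values for the three system types, not figures taken from this course, so treat them as an assumption of the sketch:

```python
# Basic COCOMO 81 coefficients (Boehm): Effort = a * KDSI**b person-months.
MODES = {
    "organic":       (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (3.6, 1.20),
}

def cocomo_effort(kdsi: float, mode: str = "organic") -> float:
    """Estimate effort in person-months from size in KDSI and system type."""
    a, b = MODES[mode]
    return a * kdsi ** b

# The same 32-KDSI system costs progressively more as constraints tighten.
for mode in MODES:
    print(mode, round(cocomo_effort(32, mode), 1))
```

Note how the exponent b > 1 encodes the diseconomy of scale: doubling KDSI more than doubles the estimated effort.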
144
Key Software Metrics
Resource Estimation Models (cont)
COCOMO (cont.)
• The COCOMO type approach to resource estimation has two
major drawbacks, both concerned with its key size factor
KDSI:
• KDSI is not known at the time when estimates are sought,
and so it must itself be predicted. This means that we are
replacing one difficult prediction problem (resource
estimation) with another which may be equally difficult (size
estimation)
• KDSI is a measure of length, not size (it takes no account of
functionality or complexity)
145
Key Software Metrics
Resource Estimation Models (cont.)
Albrecht's Function Points
• Albrecht's Function Points [Albrecht 1979] (FPs) is a popular
product size metric (used extensively in the USA and Europe) that
attempts to resolve these problems. FPs are supposed to reflect the
user's view of a system's functionality. The major benefit of FPs
over the length and complexity metrics discussed above is that they
are not restricted to code. In fact they are normally computed from a
detailed system specification, using the equation
FP = UFC × TCF
146
Key Software Metrics
Resource Estimation Models (cont.)
Albrecht's Function Points (cont.)
FP = UFC × TCF
• where UFC is the Unadjusted (or Raw) Function Count, and TCF is a
Technical Complexity Factor which lies between 0.65 and 1.35. The UFC is
obtained by summing weighted counts of the number of inputs, outputs,
logical master files, interface files and queries visible to the system user,
where:
– an input is a user or control data element entering an application;
– an output is a user or control data element leaving an application;
– a logical master file is a logical data store acted on by the application user;
– an interface file is a file or input/output data that is used by another application;
– a query is an input-output combination (i.e. an input that results in an immediate
output).
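The UFC is a weighted sum of the five counts above. The weights used below are the standard average-complexity values from Albrecht's method (inputs 4, outputs 5, queries 4, logical files 10, interface files 7); the example counts are illustrative, not from this course:

```python
# Average-complexity weights from Albrecht's function point method.
WEIGHTS = {"inputs": 4, "outputs": 5, "queries": 4, "files": 10, "interfaces": 7}

def function_points(counts: dict, tcf: float = 1.0) -> float:
    """FP = UFC * TCF, where UFC is the weighted sum of the five counts."""
    if not 0.65 <= tcf <= 1.35:
        raise ValueError("TCF must lie between 0.65 and 1.35")
    ufc = sum(WEIGHTS[k] * counts.get(k, 0) for k in WEIGHTS)
    return ufc * tcf

# Illustrative system: 20 inputs, 15 outputs, 10 queries, 5 files, 2 interfaces.
print(function_points(
    {"inputs": 20, "outputs": 15, "queries": 10, "files": 5, "interfaces": 2}))
```

With TCF = 1.0 the result equals the unadjusted count; the TCF then scales it by up to ±35% for technical complexity.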
147
Key Software Metrics
Resource Estimation Models (cont.)
Albrecht's Function Points (cont.)
• Function points are used extensively as a size metric in preference to LOC. Thus,
for example, they are used to replace LOC in the equations for productivity and
defect density. There are some obvious benefits: FPs are language independent
and they can be computed early in a project. FPs are also being used increasingly
in new software development contracts.
• One of the original motivations for FPs was as the size parameter for effort
prediction. Using FPs avoids the key problem identified above for COCOMO: we
do not have to predict FPs; they are derived directly from the specification which
is normally the document on which we wish to base our resource estimates.
• The major criticism of FPs is that they are unnecessarily complex. Indeed
empirical studies have suggested that the TCF adds very little in practical terms.
For example, effort prediction using the unadjusted function count is often no
worse than when the TCF is added [Jeffery et al 1993]. FPs are also difficult to
compute and contain a large degree of subjectivity. There is also doubt that
they actually measure functionality.
148
Key Software Metrics
Incident Count Metrics
• No serious attempt to use measurement for software QA would be complete
without rigorous means of recording the various problems that arise during
development, testing, and operation.
• It is important for developers to measure those aspects of software quality that can
be useful for determining
– how many problems have been found with a product
– how effective are the prevention, detection and removal processes
– when the product is ready for release to the next development stage or to the customer
– how the current version of a product compares in quality with previous or competing
versions
• The terminology used to support this investigation and analysis must be precise,
allowing us to understand the causes as well as the effects of quality assessment
and improvement efforts.
149
Key Software Metrics
Incident Count Metrics (cont.)
One of the problems with problems is that the terminology is not uniform. If an organization
measures its software quality in terms of faults per thousand lines of code, it may be impossible to
compare the result with the competition if the meaning of "fault" is not the same. The software
engineering literature is rife with differing meanings for the same terms. Below are just a few
examples of how researchers and practitioners differ in their usage of terminology.
Until terminology is the same, it is important to define terms clearly, so that they are understood by
all who must supply, collect, analyze and use the data. Often, differences of meaning are acceptable,
as long as the data can be translated from one framework to another.
150
Key Software Metrics
Incident Count Metrics (cont.)
• We describe the observations of development, testing, system operation and
maintenance problems in terms of incidents, faults and changes. Whenever a
problem is observed, we want to record its key elements, so that we can then
investigate causes and cures. In particular, we want to know the following:
1. Location: Where did the problem occur?
2. Timing: When did it occur?
3. Mode: What was observed?
4. Effect: Which consequences resulted?
5. Mechanism: How did it occur?
6. Cause: Why did it occur?
7. Severity: How much was the user affected?
8. Cost: How much did it cost?
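The eight questions above can be captured directly as a record structure for problem tracking. A sketch (the class name and example values are illustrative, not from any particular tracking system):

```python
from dataclasses import dataclass

@dataclass
class IncidentReport:
    """One record per observed problem, keyed by the eight questions above."""
    location: str   # 1. where the problem occurred
    timing: str     # 2. when it occurred
    mode: str       # 3. what was observed
    effect: str     # 4. which consequences resulted
    mechanism: str  # 5. how it occurred (chain of events)
    cause: str      # 6. why it occurred
    severity: str   # 7. e.g., "critical", "major", "minor"
    cost: float     # 8. cost in an agreed unit, e.g., engineer-hours

incident = IncidentReport(
    location="site-042/platform-B", timing="2024-03-01T10:15Z",
    mode="error message E17", effect="services degraded",
    mechanism="restart during batch upload", cause="user error",
    severity="minor", cost=4.0)
```

Keeping all eight fields mandatory forces each report to answer every question, which is what makes later causal analysis across many incidents possible.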
151
Key Software Metrics
Incident Count Metrics (cont.)
Incidents
• An incident report focuses on the external problems of the system: the installation, the chain of events
leading up to the incident, the effect on the user or other systems, and the cost to the user as well as the
developer. Thus, a typical incident report addresses each of the eight attributes in the following way.
• Incident Report
• Location: such as installation where incident observed - usually a code that uniquely identifies the
installation and platform on which the incident was observed.
• Timing: CPU time, clock time or some temporal measure. Timing has two, equally important aspects: real
time of occurrence (measured on an interval scale), and execution time up to occurrence of incident
(measured on a ratio scale).
• Mode: type of error message or indication of incident (see below)
• Effect: description of incident, such as "operating system crash," "services degraded,"
152
Key Software Metrics
Incident Count Metrics (cont.)
Incidents (cont.)
• Mechanism: chain of events, including keyboard commands and state data, leading to incident. This
application-dependent classification details the causal sequence leading from the activation of the source to
the symptoms eventually observed. Unraveling the chain of events is part of diagnosis, so often this
category is not completed at the time the incident is observed.
• Cause: reference to possible fault(s) leading to incident. Cause is part of the diagnosis (and as such is more
important for the fault form associated with the incident). Cause involves two aspects: the type of trigger
and the type of source (that is, the fault that caused the problem). The trigger can be one of several things,
such as physical hardware failure; operating conditions; malicious action; user error; erroneous report while
the actual source can be faults such as these: physical hardware fault; unintentional design fault; intentional
design fault; usability problem.
• Severity: how serious the incident’s effect was for the service required from the system. Reference to a
well-defined scale, such as "critical," "major," "minor". Severity may also be measured in terms of cost to
the user.
• Cost: Cost to fix plus cost of lost potential business. This information may be part of diagnosis and
therefore supplied after the incident occurs.
153
Key Software Metrics
Incident Count Metrics (cont.)
Faults
• An incident reflects the user’s view of the system, but a fault is seen only by the developer. Thus, a fault
report is organized much like an incident report but has very different answers to the same questions. It
focuses on the internals of the system, looking at the particular module where the fault occurred and the cost
to locate and fix it. A typical fault report interprets the eight attributes in the following way:
Fault Report
• Location: within-system identifier, such as module or document name. The IEEE Standard Classification
for Software Anomalies, [IEEE 1992], provides a high-level classification that can be used to report on
location.
• Timing: phases of development during which fault was created, detected and corrected. Clearly, this part of
the fault report will need revision as a causal analysis is performed. It is also useful to record the time taken
to detect and correct the fault, so that product maintainability can be assessed.
• Mode: type of error message reported, or activity which revealed fault (such as review). The Mode
classifies what is observed during diagnosis or inspection. The IEEE standard on software anomalies, [IEEE
1992], provides a useful and extensive classification that we can use for reporting the mode.
154
Key Software Metrics
Incident Count Metrics (cont.)
Faults (cont.)
• Effect: failure caused by the fault. If separate failure or incident reports are maintained, then this entry
should contain a cross-reference to the appropriate failure or incident reports.
• Mechanism: how source was created, detected, corrected. Creation explains the type of activity that was
being carried out when the fault was created (for example, specification, coding, design, maintenance).
Detection classifies the means by which the fault was found (for example, inspection, unit testing, system
testing, integration testing), and correction refers to the steps taken to remove the fault or prevent the fault
from causing failures.
• Cause: type of human error that led to fault. Although difficult to determine in practice, the cause may be
described using a classification suggested by Collofello and Balcom: a) communication: imperfect transfer
of information; b) conceptual: misunderstanding; or c) clerical: typographical or editing errors
• Severity: refer to severity of resulting or potential failure. That is, severity examines whether the fault can
actually be evidenced as a failure, and the degree to which that failure would affect the user
• Cost: time or effort to locate and correct; can include analysis of cost had fault been identified during an
earlier activity
155
Key Software Metrics
Incident Count Metrics (cont.)
Changes
• Once a failure is experienced and its cause determined, the problem is fixed through
one or more changes. These changes may include modifications to any or all of the
development products, including the specification, design, code, test plans, test data
and documentation. Change reports are used to record the changes and track the
products most affected by them. For this reason, change reports are very useful for
evaluating the most fault-prone modules, as well as other development products
with unusual numbers of defects. A typical change report may look like this:
Change Report
• Location: identifier of document or module affected by a given change.
• Timing: when change was made
• Mode: type of change
156
Key Software Metrics
Incident Count Metrics (cont.)
Changes (cont.)
– Effect: success of change, as evidenced by regression or other testing
– Mechanism: how and by whom change was performed
– Cause: corrective, adaptive, preventive or perfective
– Severity: impact on rest of system, sometimes as indicated by an ordinal
scale
– Cost: time and effort for change implementation and test
157
Software Quality Assurance
Framework of
Quality Assurance Attributes for
Process Improvement
158
Software Quality Assurance
• Business is caught up in a massive restructuring not seen since the
second industrial revolution, which introduced the factory system and
dramatically changed all aspects of our lives. The result is a paradigm
shift that requires a new context for leadership and management
practice in all sectors of our economy.
• We cannot stop it. We cannot avoid it. We can, however, tap into the
power generated by the change forces at work here and use it to
transform our company in ways that will take full advantage of the
capabilities inherent in an information age economy.
• The term information age is no more restricted to computers and data
than the term industrial age is limited to machines and materials. The
information age concept is all-encompassing and affects all facets of
our culture, society, and economy.
159
Software Quality Assurance
• Process Improvement science/engineering includes the methodologies and best
practices proven to encourage and help organizations to change the way they
not only produce products but also the way they manage the people who are
producing those products.
Elements of Process Improvement Methodology
• A Framework for Managing Process Improvement (Framework) is the
response to the need for an overarching methodology. The Framework consists
of a comprehensive methodology for performing process improvement
projects and is applicable in all areas of the software industry.
– Continuous Process Improvement (CPI) that reduces variation in the quality of
output products and services and incrementally improves the flow of work within a
functional activity.
– Business Process Redesign (BPR) that removes non-value added activities from
processes, improves our cycle-time response capability, and lowers process costs.
– Business Process Reengineering (BPR) that radically transforms processes through
the application of enabling technology to gain dramatic improvements in process
efficiency, effectiveness, productivity, and quality.
160
Continuous Process Improvement (CPI).
• Continuous process improvement is most closely associated with the
Total Quality Management (TQM) discipline. The traditional approach
is to empower self-managed teams to make task-level improvements in
quality, cycle time, and cost. Improvements are incremental and
sustained. They are creative responses to the constant need to get the
job done in changing circumstances. CPI actions typically are wholly
contained within one functional activity, although cross-functional
teams can be organized to deal with chronic or pervasive situations.
• To use an analogy, the objective of a CPI team is to tend to one or two
trees in the forest.
161
Business process redesign (BPR).
• Business process redesign is the next level of improvement. BPR actions
are undertaken in a project context with planned or specific improvement
objectives. The focus is on streamlining processes by detecting and
eliminating non-value added process time and costs, and incorporating
best practices in whole or in part. Maximizing improvement in the quality of
output products and services is the objective of BPR.
• Cross-functional teams work together to model and analyze processes
(AS-IS condition), and design a TO-BE condition that maximizes quality
and production capabilities. There is a strong reliance on capturing the
experience base of project participants as the basis for generating
improvement ideas. While these techniques often produce significant
improvement ideas, the ideas are limited because of the insider perspective
of members of the improvement team.
• To continue the analogy, the forest is managed in spite of all the trees.
162
Business process reengineering (BPR).
• Process reengineering is often undertaken in response to dramatic changes in
the external environment (a paradigm shift, for instance) that apply
considerable pressure on the ability of the organization to fulfill its mission,
improve its competitive positioning, or to even survive as an entity. Virtually
all functions within the organization are affected by BPR actions. The existing
organizational and technological infrastructures are subject to major
dislocations, and pressure is applied to the very culture of the organization.
• BPR actions are initiated and guided by senior leadership.
• New performance measures must be developed to reflect these changes.
Assuming that the objective of BPR is to produce quantum leaps in quality,
cycle time, and cost efficiency; baseline measurements and data will be of
little use (and may inhibit) the search for new, more meaningful measures.
• To complete the analogy, the objective of the BPR team is to create a new
forest with sturdier and more valuable trees.
163
Three organizational elements involved in process improvement.
• Process, people, and technology are present at all three levels.
– At the CPI level, people and how they perform their jobs are the focus.
– At the business process redesign level, process is the focus of improvement efforts.
– At the business process reengineering level, technology assumes primary importance.
But all three components are always part of every quality improvement effort.
The differences are only of degree, importance, and priority.
• We can also make the case that each level of improvement emphasizes
different categories of measures.
– CPI deals, to a great extent, with quality measures;
– BPR with cost measures.
164
Software Quality Assurance
Process Management Principles
• The concept of process management is not new. It has been practiced on the
factory floor for years. Process management became possible in manufacturing
as machines began replacing manual labor, and information and
communication networks became available to control the machines. Work
flow was simplified, product quality was increased, cycle time was reduced,
and process costs had nowhere to go but down. Today, there are entire
factories with processes so well designed that no humans are needed except for
emergencies, maintenance, and enhancements. And it is possible to buy a
small high-quality color television for the price of a meal for two at a better
restaurant.
• It is interesting to note that while the typical service corporation today is
patterned after the old-style factory with its manually-intensive, functionally-
based division of labor concept, the new-age service corporation is being
modeled after the modern industrial corporation and its automated end-to-end
processes.
165
Software Quality Assurance
Functions and Processes
• A process is the largest unit of work
flow through an enterprise, beginning
with external suppliers and ending
with external customers.
Along the way, value is
added at each step through a series of
transformations involving the
consumption of resources within an
established control (rule-based)
framework. Processes may be
decomposed into smaller units
beginning with subprocesses and
continuing to activities, tasks, and
operations. The principles are the same,
regardless of the level of work
transformation observed.
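The decomposition described above (process into subprocesses, activities, tasks, and operations) can be sketched as a simple hierarchy. This is a minimal illustration, not from the source; the class, method names, and example work units are all hypothetical.

```python
# Hypothetical sketch of the work-transformation hierarchy described above:
# a process decomposes into subprocesses, activities, tasks, and operations.

LEVELS = ["process", "subprocess", "activity", "task", "operation"]

class WorkUnit:
    def __init__(self, name, level=0):
        self.name = name
        self.level = level        # index into LEVELS
        self.children = []        # smaller units of work at the next level

    def decompose(self, *names):
        """Split this unit into named units one level down."""
        if self.level + 1 >= len(LEVELS):
            raise ValueError("operations are the smallest unit of work")
        self.children = [WorkUnit(n, self.level + 1) for n in names]
        return self.children

    def outline(self, indent=0):
        """Yield an indented outline of the decomposition."""
        yield f"{'  ' * indent}{LEVELS[self.level]}: {self.name}"
        for child in self.children:
            yield from child.outline(indent + 1)

# Hypothetical example: decompose one piece of an order-fulfillment process.
proc = WorkUnit("order fulfillment")
sub, = proc.decompose("shipping")
sub.decompose("pack order", "label package")
print("\n".join(proc.outline()))
```

The point of the sketch is the slide's closing remark: the same principles apply regardless of which level of work transformation is observed, so one structure serves every level.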
166
Software Quality Assurance
Process Stakeholders
• Customers - Customers are the
recipients and users of the products and
services produced by the process.
• Suppliers - Suppliers provide the
inputs to a process, which consist of
materials and data.
• Senior Management - Senior
Management is defined as those entities
outside the process that set rules,
requirements, standards, constraints,
and budgets on process performance.
• Resource Providers - Resource
providers fund process operations with
facilities, equipment, machines, and
labor. Everything consumed in whole or
in part is considered a process resource.
167
Software Quality Assurance
Barriers to Process Improvement
• Process improvement actions and programs face obstacles that must be
identified, understood, and overcome. In general, the barriers fall into
one of three categories:
– Organizational
– Cultural
– Regulatory.
169
Software Quality Assurance
Organizational Barriers
• These barriers are related to the hierarchical structure of the enterprise
in which employees focus more on serving management than on
providing quality and service to the Customer. Also included are the
barriers inherent in functional rather than process management.
• Senior leadership commitment and buy-in: Business process
reengineering is a top-down initiative and depends upon strong
leadership to meet and overcome obstacles to success.
• Mismatch between authority and responsibilities: Process
improvement includes the concepts of team-based performance and
worker empowerment, both of which challenge traditional
management and organizational wisdom.
170
Software Quality Assurance
Organizational Barriers (cont.)
• Functional and technical stovepipes: The concept of process crosses
established organizational boundaries and requires new methods of
managing work products.
• Funding: Current funding practices support the status quo and inhibit
management's ability to apply scarce resources to activities that
produce products and services most needed by internal and external
customers.
171
Software Quality Assurance
Cultural Barriers
• These barriers are related to work practices developed over time that
militate against decentralized decision making and worker
empowerment - both requisites for high performance in an information
age economy.
– Becoming customer-centered: Managers and employees must shift from a
rule-based to a customer-based mode of operation and establish
performance measures that focus on process outcomes rather than process
inputs such as budget.
– Aversion to job elimination, risk, and change: The very essence of process
improvement is the elimination of non-value added activities, radical
change, and adoption of high-technology solutions - all of which
challenge the organizational status quo.
172
Software Quality Assurance
Regulatory Barriers
• These barriers prevent the redesign of workflow
commensurate with process management, effective
utilization of work teams, and innovative changes in the
recognition and reward systems that promote effective
teamwork.
– Focus on current operations: Process improvement disrupts
established procedure and challenges existing regulations and
directives. Management must successfully guide employees
through the transition period from existing process to
reengineered process.
173
Software Quality Assurance
Regulatory Barriers (cont.)
– Inconsistent rules, methods, and techniques underlying
management processes: Much of the non-value added activities
associated with processes undergoing improvement can be
traced back to obsolete or inappropriate rules, regulations,
methods, and techniques whose worth must be reevaluated.
– Policies on job descriptions, training, and reassignment: Process
improvement calls for a radical reengineering of personnel
policies that currently limit the authority of management to
develop a flexible, customer-centered work force with
appropriate rewards and recognition for team-centered
performance and individual skill development.
174
Software Quality Assurance
Approach to Quality Process Improvement
• The methodology for conducting process improvement projects is
described in detail beginning in Section 4 of this guidebook. This is the
general approach for conducting a process improvement effort:
1. Understand the business and functional requirements for the process under
study with respect to mission.
2. Assess the current status of all process elements (process, people and
technology) with respect to meeting requirements and enabling change.
3. Establish the baseline (AS-IS models) with respect to process, data,
organization, and technology.
4. Identify and quantify stakeholder interests in the process, establish new
standards of performance, and design measures and key indicators.
5. Conduct an improvement analysis program to identify potential
improvement initiatives that will raise process performance to the desired
level.
175
Software Quality Assurance
Approach to Quality Process Improvement (cont.)
6. Design a change management program that will address organizational and
technical issues to align improvements in these elements with potential or
planned process improvement.
7. Develop a process vision and construct TO-BE models of process, data,
organization, and technology based on the vision and improvement
initiatives. Identify alternative means of achieving the desired future state.
8. Perform economic and risk analysis on all alternatives, and select and
document a recommended course of action.
9. Perform enterprise engineering to construct an organizational and
technological platform suitable for the improved, redesigned, or
reengineered process.
10. Test (prototype or pilot), implement, deploy, operate, and maintain the
improved process and supporting systems.
11. Evaluate progress, update baseline models, and prepare for the next cycle
of improvements.
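The eleven steps above form a repeatable cycle (step 11 feeds the next iteration). A minimal sketch, not from the source, with step descriptions condensed from the text:

```python
# Hypothetical sketch: the eleven-step improvement approach above expressed
# as an ordered, repeatable cycle. Descriptions are condensed from the text.

IMPROVEMENT_CYCLE = [
    "understand business and functional requirements",
    "assess current process, people, and technology",
    "establish the AS-IS baseline",
    "identify stakeholder interests and design measures",
    "conduct an improvement analysis program",
    "design a change management program",
    "develop a vision and construct TO-BE models",
    "perform economic and risk analysis and select a course of action",
    "perform enterprise engineering",
    "test, implement, deploy, operate, and maintain",
    "evaluate progress and prepare for the next cycle",
]

def run_cycle(execute):
    """Apply execute(step_number, description) to each step in order."""
    for number, step in enumerate(IMPROVEMENT_CYCLE, start=1):
        execute(number, step)

# Usage: record each step as it would be carried out.
log = []
run_cycle(lambda n, s: log.append((n, s)))
```

Expressing the approach as ordered data makes the two constraints in the text explicit: the steps run in sequence, and the last step deliberately loops back to the first.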
176
Software Quality Assurance
Business Engineering Management Process
177
Software Quality Assurance
Quality Improvement Attributes
• Quality process improvement (especially process reengineering) is a
complex undertaking that demands leadership from the highest levels of the
company and participation from virtually all executives, managers, and
professional employees.
1. Total Quality Management (TQM) principles and practices ensure high-
quality products and services to both internal and external customers of
business processes.
2. Industrial engineering provides process measures, controls on process
efficiency and effectiveness, and standardized procedures.
3. Workflow design that incorporates the concurrent management of
technological and human change.
4. Process redesign incorporates innovations and eliminates non-value added
time and costs from processes.
178
Software Quality Assurance
Quality Improvement Attributes (cont.)
5. The introduction of competitive (aggressive) information technology
enables superlative customer quality and service.
6. The concept of an end-to-end process improvement model with a
supporting methodology seems to be the most expeditious way to
implement Davenport's principles. Such a methodology would:
– Begin with a statement of mission, vision, objectives, goals, and
strategies
– Produce a reengineered business process to support the stated
organizational mission and related strategic and business plans
179
Software Quality Assurance
Quality Improvement Attributes (cont.)
– Continue through information systems design, deployment, and
operations consistent with the Enterprise Model
– Employ process management principles to ensure that process
improvement gains are held
– Focus on cultural and organizational change management issues and
structural barriers to change that represent the most risk-prone
component of process improvement efforts.
• The Framework for Managing Process Improvement (Framework or
F/MPI) is designed to enable process improvement efforts within the
Department of Defense consistent with the established body of
expertise for process improvement and best practices analysis.
180
Software Quality Assurance
Change Management
• Quality improvement inevitably requires change to an organization's
structure and culture.
– All change is disruptive. Therefore, business process improvement is
disruptive to an organization's structure and culture.
– Enterprises that have attempted process improvement while ignoring this
syllogism have invariably failed.
– This means that organizational change management is the most critical
responsibility in an overall program of process improvement.
• Organizational change management begins during the planning phase
and extends through the project execution phase. Thus, dealing with
organizational change is a continuous responsibility. This
responsibility may be vested in one member of the improvement team,
or it may be the responsibility of a separate team chartered to support
all process improvement teams. The former works when only one
process improvement effort is under way across a group of functional areas.
181
Software Quality Assurance
Change Management (cont.)
• Organizational change management begins during the planning phase
and extends through the project execution phase.
– organizational change is a continuous responsibility.
– responsibility is vested in the SQA team
• SQA can serve as the focus or hub for multiple process improvement
efforts. This arrangement allows:
– The change management team to take a broader view of the relationship of
process to organization
– Assurance that organizational change management efforts are coordinated
for all improvement efforts.
– Technology change management, discussed in Section 7 of this guide, is
also best dealt with by an independent change management team working
with separate process improvement teams.
182
Software Quality Assurance
Change Management (cont.)
– Technology change management is also best dealt with by an
independent change management team working with separate
process improvement teams.
• The following figure also shows that change management
must consider the external environment, external
stakeholders, and the technical infrastructure when
attempting to shape the organization's structure and culture
around improved processes.
183
Software Quality Assurance
Change Management Schematic
[Figure: change management team schematic]
184
Software Quality Assurance
The change management team must accomplish four general objectives:
1. Understand the organizational changes that are needed as a consequence
of process redesign or reengineering
2. Design the necessary structural changes needed to support the new
process
3. Design a program that will begin the cultural transformation of the
organization to one that is aligned with the principles behind process
improvement
4. Anticipate, recognize, and resolve the barriers to change that will spring
up in reaction to the change management plan.
185
Software Quality Assurance
• Solid change demands careful planning with inputs from all
levels of the organization. Carefully timed action steps support
change implementation. This requires answers to the who,
what, when, where, and why.
– Is the organization's leadership prepared to lead the change effort?
– Have we primed the organization for planned changes?
– Is the organizational structure ready to support change?
– Do team members have the right mix of skills to make the changes happen?
– Is success a reasonable expectation?
– Are functional managers properly committed to the planned change?
– Are sufficient resources available to support the change effort?
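The readiness questions above can be treated as a go/no-go checklist: any unanswered or negative item flags the change effort as not ready to proceed. A minimal sketch, not from the source; the function and wording of the items are illustrative.

```python
# Hypothetical sketch of the readiness questions above as a go/no-go
# checklist. Any unanswered or negative item holds the change effort.

READINESS_QUESTIONS = [
    "leadership is prepared to lead the change effort",
    "the organization is primed for the planned changes",
    "the organizational structure is ready to support change",
    "team members have the right mix of skills",
    "success is a reasonable expectation",
    "functional managers are committed to the planned change",
    "sufficient resources are available",
]

def assess_readiness(answers):
    """answers maps each question to True/False.
    Returns (ready, unresolved), where unresolved lists the failed items."""
    unresolved = [q for q in READINESS_QUESTIONS if not answers.get(q, False)]
    return (not unresolved, unresolved)

# Usage: a single unresolved item is enough to hold the effort.
answers = {q: True for q in READINESS_QUESTIONS}
answers["sufficient resources are available"] = False
ready, open_items = assess_readiness(answers)
```

Treating a missing answer as "no" is deliberate: the text frames each question as something that must be affirmatively established before change implementation begins.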
186
Software Quality Assurance
Barriers to change management success are well known.
• They generally include the following; all must be successfully contained
or resolved, or the success of the process improvement effort cannot be
reasonably assured:
– Lack of active, visible, and committed leadership
– Ignorance of improvement goals and employees' roles in achieving
them
– Ingrained satisfaction with the status quo
– Vested interests in keeping things as they are
– Inadequate training and preparation for the change effort
187
Software Quality Assurance
Barriers to change management success are well known (cont.)
– Hostility directed at members of the improvement or change teams
– No or little reward or recognition for participating in the change effort
– Threats to personal security and career objectives
– Lack of organizational support
– Inadequate, infrequent, or misleading communications about the changes
188
Software Quality Assurance
Course Summary
• Software Quality Assurance is an engineering science that is
responsible for ensuring that all software disciplines perform
their tasks with the goal of providing a product that meets the
customer's idea of quality.
• Although SQA is led by the SQA team, the responsibility for
building quality attributes into the product rests with all members
of the product team.

SQA_Class

  • 1.
    Software Quality Assurance RobertL. Straitt University of Corsica Corte, Corsica
  • 2.
    2 Software Quality Assurance Dr.Robert L. Straitt • This course is presented to introduce students to the Software Quality Assurance principals as practiced in industry today. • The Course is presented in English because this has become the international language of software development and the students mastering of technical English can greatly increase the marketability to industry. • This presentation in whole or part can be used and referred to by students during the end of course exam.
  • 3.
    3 Software Quality Assurance Dr.Robert L. Straitt Notice for Students • Students should pay careful attention to the sections of this presentation that are underlined, or underlined and bolded as these the most important point for the lectures and are the source material for questions on the exam.
  • 4.
    4 Software Quality Assurance •Overview of Software Quality Assurance • Quality Attribute Design Primitives • Defect Prevention • Software Metrics • Process Improvement
  • 5.
    5 Software Quality Assurance Overviewof Software Quality Assurance
  • 6.
    6 • Software QualityAssurance (SQA) is a group of related activities employed throughout the software life cycle to positively influence and quantify the quality of the delivered software. SQA is not exclusively associated with any major software development activity, but spans the entire software life cycle. • It consists of both process and product assurance. Its methods include assessment activities such as ISO 9000 and CBA IPI (CMM-Based Appraisal for Internal Process Improvement), analysis functions such as reliability prediction and causal analysis, and direct application of defect detection methods such as formal inspection and testing. Introduction
  • 7.
    7 SOFTWARE QUALITY FACTOR DEFINITION CANDIDATEMETRIC CorrectnessExtent towhichthesoftware conformstospecifications andstandards Defects LOC Efficiency Relativeextent towhicha resourceisutilized(i.e., storage, space, processing time, communicationtime) Actual sourseUtilization Allocated sourseUtilization Re Re Expandability Relativeeffort toincrease softwarecapabilityor performancebyenhancing current functionsor by addingnewfunctionsor data EffortToExpand EffortToDevelop Flexibility Easeof effort for changing softwaremissions, functions, or datatosatisfy otherrequirements   005. AvgLaborDaysToChange Integrity Extent towhichthesoftware will performwithout failure duetounauthorizedaccess tothecodeor data Defects LOC Interoperability Relativeeffort tocouplethe softwareof onesystemto thesoftwareof another EffortToCouple EffortToDevelop Maintainability Easeof effort for locating andfixingasoftwarefailure withinaspecifiedtime period   01. AvgLaborDaysToFix Portability Relativeeffort totransport thesoftwarefor usein EffortToTransport EffortToDevelop duetounauthorizedaccess tothecodeordata Interoperability Relativeefforttocouplethe softwareofonesystemto thesoftwareofanother EffortToCouple EffortToDevelop Maintainability Easeofeffortforlocating andfixingasoftwarefailure withinaspecifiedtime period   01. 
AvgLaborDaysToFix Portability Relativeefforttotransport thesoftwareforusein anotherenvironment (hardwareconfiguration, and/orsoftwaresystem environment) EffortToTransport EffortToDevelop Reliability Extenttowhichthesoftware willperformwithoutany failureswithinaspecified timeperiod Defects LOC Reusability Relativeefforttoconverta softwarecomponentforuse inanotherapplication EffortToConvert EffortToDevelop Survivability Extenttowhichthesoftware willperformandsupport critical functionswithout failurewithinaspecified timeperiodwhenaportion ofthesystemisinoperable Defects LOC Usability Relativeeffortforusing software(trainingand operation,e.g., familiarization,input preparation,execution, outputinterpretation) LaborDaysToUse LaborYearsToDevelop Verifiability Relativeefforttoverifythe specifiedsoftwareoperation andperformance EffortToVerify EffortToDevelop
  • 8.
    8 Process & ProductAssurance • Software Quality Assurance is the activities of verifying the ability of the software process to produce a software product that is fit for use. • SQA provides senior management with feedback on process compliance. • It provides Senior management with an early warning of quality deficiencies and recommendations to address the situation. • Software quality assurance personnel MUST have a reporting path which is independent of the management responsible for the activities being evaluated for quality to prevent conflicts with project management goals of product delivery encourages adherence to the official process. • The methods typically used to accomplish process assurance include SQA audit and reporting, assessment and statistical process control analysis.
  • 9.
    9 Process & ProductAssurance • Assurance that the final product performs as specified is the role of quality assurance. This includes in process methods, as well as some methods that involve independent oversight. • The purpose of embedded SQA processes include activities that are part of the development life cycle which will “build-in” the desired product quality. This focus allows identification and elimination of defects as early in the life cycle as possible, thus reducing maintenance and test costs. Embedded SQA methods include formal inspection, reviews, and testing. • Independent oversight functions such as independent test function, or testing which is witnessed by an independent entity such as SQA, are other methods of providing product assurance. Other methods include tests witnessed by customers, expert review of test results, or audits of the product.
  • 10.
    10 Process & ProductAssurance • Remember that for each unique product there is only one unique process that will produce it. Process = Product A = B Design Requirement + Process = Product + Required Characteristics X+A = B+X
  • 11.
    11 Methods and Technologies •Many methods are used to perform the quality assurance functions. Audits are used to examine the conformance of a development process to procedures and of products to standards. Embedded SQA activities, which have the purpose of detecting and removing errors, take a variety of forms, including inspection and testing. Assessment is another method of process assurance. Analysis techniques, such as causal analysis, reliability prediction and statistical process control, help ensure both process and product conformance.
  • 12.
    12 Methods and Technologies EmbeddedSQA Error Detection Methods • Audits • Formal Inspection • Reviews • Walkthroughs • Testing • Assessment • Causal Analysis/Defect Prevention • Reliability Prediction • Statistical Process Control
  • 13.
    13 Methods and Technologies Audits •Audits are embedded into the software life cycle, as well as being performed by independent SQA outside of the lifecycle. • An external audit focuses on the adherence to established standards and procedures. Evaluation of the effectiveness of the process is part of an SQA audit. This type of audit examines records according to a sampling process to determine if procedures are being followed correctly. • In general, an embedded audit examines work products to determine if the products conform to standards and if project status is accurate. Manual or automated methods can be used.
  • 14.
    14 Methods and Technologies FormalInspection • Formal inspection is an examination of a software product at different stages of the development process it can utilize checklists, expert inspectors, and a trained inspection moderator. The objective is to identify defects in the product and the process deficiency that allowed for them to exist • Certain projects which have an effectively performing inspection process report better than 80% defect detection rates.[1] • [1] Tom Gilb and Dorothy Graham, Software Inspection (Wokingham, England: Addison-Wesley Publishing, 1993), pp. 18-19.
  • 15.
    15 Methods and Technologies Reviews •Reviews are performed in between formal inspections as an SQA method. Informal design and code review methods are generally done at the discretion of the product manager, but should always follow a defined process and should be summarized and reported at the project level. • The term “review” is also used to refer to project meetings (e.g., a product design review) which emphasize resolving issues and which have a primary objective of assessing the current state of the product.[1] • [1] Robert G. Ebenau and Susan H. Strauss, Software Inspection Process (New York: McGraw-Hill, 1994), pp.283-285.
  • 16.
    16 Methods and Technologies Walkthroughs •Walkthroughs are meetings in which the author of the product acts as presenter to proceed through the material in a stepwise manner. • The objective is often raising and/or resolving design or implementation issues. • Walkthroughs can be more informal but should follow a defined walkthrough process and documented.
  • 17.
    17 Methods and Technologies Testing •Testing is a dynamic analysis technique with the objective of error detection. • Testing of software is performed on individual components during intermediate stages of development, subsystems following integration, and entire software systems. • Testing involves execution of the software and evaluation of its behavior in response to a set of input against documented, required behavior.
  • 18.
    18 Methods and Technologies Assessment •Assessment is determining the capability of a process through comparison with a standard. Examples of standard assessment methods which are frequently employed are ISO 9000 and SEI SW- CMM®. • The Software Engineering Institute (SEI) at Carnegie Mellon University was established by the Air Force in 1984 to improve the practice of software engineering. The Software Capability Maturity Model (SW-CMM.®.) is a model for software process assessment.
  • 19.
    19 Methods and Technologies Assessment(cont.) • The ISO 9001 international standard addresses quality requirements across industries. The requirements within the standard are written in a generic manner. The ISO 9000-3 document gives guidance for applying the ISO 9001 standard to software. • Assessments use outside individuals to perform a combination of random auditing and interviewing to answer a list of questions which is tailored to fit the organization being assessed.
  • 20.
    20 Methods and Technologies CausalAnalysis/Defect Prevention • The purpose of these activities is to probe for process weaknesses that allowed defects to be inserted into the product before they occur. Root cause analysis, which may include developers and other analysts, determines the root cause of the defects found. Process improvement initiatives are often generated and passed on to a process management team. • It is important that the elapsed time between defect discovery and causal analysis be minimized.
  • 21.
    21 Methods and Technologies CausalAnalysis/Defect Prevention Diagram
  • 22.
    22 Methods and Technologies CausalAnalysis/Defect Historical Example 1976 1977 1978 1979 1980 1981 1982 1983 1984 1985 1986 1987 1988 1989 1990 1991 1992 1993 1994 1995 1996 1997 1998 Structured flows Formal software inspections Formal inspection moderators Formalized configuration control Inspection improvements Configuration Management Data Base (CMDB) Error Counting, Metrics Formal Process Error Cause Analysis Process Analysis Quality Management and Process Enhancement Oversight analysis Build automation IBM process assessment Formalized requirements analysis Quarterly quality reviews Prototyping Inspection improvements Formal requirements inspections Process applied to support software FSW (recon) certification LAN based development environment Enhanced test tools Oracle based CMDB upgrade ISO 9001 certification Project manager process reviews Process improvement database Concurrent engineering process model Formalized milestone risk assessments Support software inspection process automation Re-engineering lessons learned Web-based troubleshooting guide On-line programming standards SEI SPC collabora- tion Reliability modeling Process maturity measurements Formalized training Technical exchange seminars Process evaluation Software development environment Process document assessment Reliability/ complexity research Development test philosophy Inspection process tools Groups_graphics_pubs_PASS_FSW_001.cv5 Structured education planning Formalized lessons learned
  • 23.
    23 Methods and Technologies ReliabilityPrediction • The IEEE Standard Glossary of Software Engineering Terminology defines software reliability as: “The ability of the software to perform its required function under stated conditions for a stated period of time.”[1] • The ability to predict the reliability of a software system enables high quality assurance and helps determine readiness for release. Three techniques are used: – failure record, – behavior for a random sample of input points, – And quantity of actual and “seeded” faults detected during testing. [1] Standards Coordinating Committee of the IEEE Computer Society, IEEE Standard Glossary of Software Engineering Terminology, IEEE-STD-610.12-1990 (New York: IEEE, 1991).
  • 24.
    24 Methods and Technologies StatisticalProcess Control • Statistical process control is the use of statistical methods to assure both software quality. This technique is used to determine if a process is out of defined parameters, thus indicating process defects and/or potential for increased product defects. These methods include: – Pareto analysis, – Shewhart control charts, – histograms, – and scatter diagrams.
  • 25.
    25 Methods and Technologies ShewhartChart Shewhart X-chart with control and warning limits
  • 26.
    26 Methods and Technologies ShewhartChart Shifting patterns: Shifts may result from introduction of new programs and engineers, process changes, new technologies and or software changes. They also could be from a change in SQA methods, standards, or from a process improvement. Trending patterns: Trends are usually due to a gradual patching, data degradation or, requirements instability, lack of process control. They also could be from a change in SQA methods, standards, or from a process improvement.
  • 27.
    27 Case Study: Space ShuttleFlight Software1 • The Space Shuttle Primary Avionics Software System Flight Software (PASS FSW) project, United Space Alliance, produces highly reliable software for NASA’s Space Shuttle onboard computers. It is a prime example of the effective implementation of software quality assurance. It has twice received NASA’s Quality and Excellence Award (the George M. Low Trophy). While with IBM, the project also received the IBM Market Driven Quality Award for Best Software Laboratory. It has also been assessed at SEI-CMM Level 5. • To achieve the level of quality control desired, the PASS FSW project has mainly relied on embedded SQA activities. Because of the NASA customer’s focus on the delivery of error-free software, the FSW management team felt that this could only be achieved by making every individual on the project responsible for the quality of the delivered software. This decision led to embedding within the development process activities designed to identify and remove errors from the software product and to also remove the cause of those errors from the process. 1 STSC: Software Quality Assurance, Revision 0.2, April 6, 2000
  • 28.
    28 SQA Case Study(cont.) • Beginning in the mid-1970s, the project built on a foundation that included competent and controlled software project management techniques and effective configuration management (reference Figure 1. PASS FSW Process Improvement History). The project started a practice that is still performed today: an error oversight analysis process which entailed counting errors found, analyzing those errors, correcting them and implementing process improvements to prevent the recurrence of those errors. When this process began, errors in the released product were examined. Focusing on those errors led to the realization of how expensive it was to remove errors found late in the development life cycle. • Formal design and code inspections were introduced in the late 1970s to enhance the possibility for early discovery of errors, when it was cheapest to correct those errors. When the number of errors in the released software decreased to such low numbers that error trends were not evident, errors found during inspections were also subjected to the same kind of oversight analysis. This led to improvements in the inspection process and later a change in focus of development unit testing to mirror expected customer usage of the software.
    29 SQA Case Study Figure 1: PASS FSW Process Improvement History. [Figure: a 1976–1998 timeline of the project’s process improvements, running from structured flows, formal software inspections, formalized configuration control, and error counting and metrics in the late 1970s, through error cause analysis, build automation, formal requirements inspections, and IBM process assessment in the 1980s, to ISO 9001 certification, reliability modeling, process maturity measurements, and a concurrent engineering process model in the 1990s.]
    30 SQA Case Study (cont.) • Formalization of the independent verification and validation process through documenting verification techniques was performed in the early 1980s. Inspections of the verifiers’ test procedures were added shortly thereafter, and the NASA customer was invited to participate. Further enhancements in verification included improved software interface analysis and the institution of functional test teams to improve technical education. • Continued error analysis identified requirements as a major source of errors. The next major enhancement was a formalization of the requirements analysis process to provide better requirements from which the developers and testers could work. Formal requirements inspections soon followed in 1986. The developers and verifiers participated with the requirements analysts in the inspections to help produce requirements that were viable for both implementation and testing. The requirements author and NASA customer were also invited, to provide insight into the intent of the requirements and the customer’s viewpoint. Later, these inspections were enhanced to include the requirements author’s description of potential software usage scenarios for major capabilities. All of these measures resulted in a significant decrease in software errors due to requirements problems.
    31 SQA Case Study (cont.) • Even an organizational change driven by the NASA customer’s direction to reduce personnel on the project was done in a way that resulted in further error reduction. The same personnel who performed the requirements analysis at the beginning of the software life cycle were given the responsibility for completing the system performance verification at the end of the development cycle. Their involvement in both the beginning and the end of the life cycle provided additional insights for error detection. • As each process change had visible effects in reducing the errors in the delivered product, continued process improvement was seen to be the vehicle through which the goal of error-free software could someday be realized. Process teams made up of participants in the processes were created to own the processes and proactively look for process improvements. These teams still function and are the major source for process improvement ideas.
    32 SQA Case Study (cont.) • Independent SQA oversight has also played a role in the success of the PASS FSW project. In the late 1980s, NASA customer requirements instituted an SQA role that was detached from project management. This organization has primarily used audit methods to ensure process compliance. • Internal and external assessments have also positively influenced the project’s software quality management, as well as confirmed the value of the embedded SQA. These have included a 1984 IBM assessment by a team working under Watts Humphrey; the assessment tool and methodology used later evolved into the SEI-CMM. A similar assessment was done in 1988 by a team from NASA, JPL, and SEI affiliates; the project was assessed at the highest maturity level using the SEI-CMM criteria. In the 1990s, the project received ISO 9001 certification. The project was also evaluated according to the criteria of NASA’s Quality and Excellence Award (the George M. Low Trophy) and the IBM Market Driven Quality Award, which is based on the Malcolm Baldrige criteria. All of these appraisals have aided PASS FSW in identifying areas for improvement and obtaining an industry perspective.
    33 SQA Case Study (cont.) • The results of the focus on process improvement are demonstrated in Figure 2, PASS FSW Product Error Rate, which shows the decrease in released errors since 1984. The error rate goes from 1.8 per thousand lines of changed source code in 1985 to zero in the latest release. The software currently in use to support shuttle missions has no errors attributable to the latest software release. • Process improvements continue to be made. Improvements in support tools, such as tools that aid in the analysis of software interfaces and tools that provide improved testing capabilities, also contribute to significant reductions in error rates. Embedded SQA activities in all phases of the development life cycle, management support of quality initiatives, employees empowered to own and improve processes, and error cause analysis have all enabled the PASS FSW project to meet its customer requirements for delivery of error-free software.
    34 SQA Case Study Figure 2: PASS FSW Product Error Rate. [Figure: chart of product errors per KSLOC, based on new/changed SLOCs, for operational increments OI05 through OI27, declining from about 1.8 errors/KSLOC to zero in the latest release.]
    35 Quality Attribute Design Primitives Architecture Tradeoff Analysis Initiative
    36 Quality Attribute Design Introduction Over the last 30 years, a number of researchers and practitioners have examined how systems achieve software quality attributes. Boehm and the International Organization for Standardization (ISO) introduced taxonomies of quality attributes [Boehm 78, ISO 91]. Bass, Bergey, Klein, and others have analyzed the relationship between software architecture and quality attributes [Bass 98, Bergey 01, Klein 99a, Klein 99b]. Various attribute communities have mathematically explored the meaning of their particular attributes [Kleinrock 76, Lyu 96]. The patterns community has codified recurring and useful patterns in practice [Gamma 95]. However, no one has systematically and completely documented the relationship between software architecture and quality attributes.
    37 Quality Attribute Design Benefits Systematically codifying the relationship between architecture and quality attributes has several obvious benefits: • A greatly enhanced architectural design process, for both generation and analysis. • During design generation, an architect could reuse existing analyses and determine tradeoffs explicitly rather than on an ad hoc basis. Experienced architects do this intuitively, but even they would benefit from codified experience. For example, during analysis, architects could recognize a codified structure and know its impact on quality attributes.
    38 Quality Attribute Design Benefits (cont.) • A method for manually or dynamically reconfiguring architectures to provide specified levels of a quality attribute. Understanding the impact of quality attributes on architectural mechanisms will enable architects to replace one set of mechanisms with another when necessary. • The potential for third-party certification of components and component frameworks. Once the relationship between architecture and quality attributes is codified, it is possible to construct a testing protocol that will enable third-party certification.
    39 Quality Attribute Design Cause for Lack of Documentation The lack of documentation stems from several causes: • The lack of a precise definition for many quality attributes inhibits the exploration process. Attributes such as reliability, availability, and performance have been studied for years and have generally accepted definitions. Other attributes, such as modifiability, security, and usability, do not have generally accepted definitions. • Attributes are not discrete or isolated. For example, availability is an attribute in its own right. However, it is also a subset of security (because a denial-of-service attack could limit availability) and of usability (because users require maximum uptime). Is portability a subset of modifiability or an attribute in its own right? Both relationships exist in quality attribute taxonomies.
    40 Quality Attribute Design Cause for Lack of Documentation (cont.) • Attribute analysis does not lend itself to standardization. There are hundreds of patterns at different levels of granularity, with different relationships among them. As a result, it is difficult to decide which situations or patterns to analyze for which quality, much less to categorize and store that information for reuse. • Analysis techniques are specific to a particular attribute. Therefore, it is difficult to understand how the various attribute-specific analyses interact.
    41 Quality Attribute Design Background A widely held premise of the software architecture community is that architecture determines quality attributes. This raises several questions: • Is this premise true, and why do we believe that quality attribute behavior and architecture are so intrinsically related? • Can we pinpoint key architectural decisions that affect specific quality attributes? • Can we understand how architectural decisions serve as focal points for tradeoffs between several quality attributes?
    42 Quality Attribute Design Relationships of Architecture to Quality In essence, software architectural design, like any other design process, consists of making decisions. These decisions involve: • Separating one set of responsibilities from another • Duplicating responsibilities • Allocating responsibilities to processors • Determining how discrete elements of responsibilities cooperate and coordinate
    43 Quality Attribute Design Relationships of Architecture to Quality (cont.) By examining these decisions and their results, we can clarify the relationship between quality attributes and software architecture, and draw two conclusions: 1. At this level of design, much can be said about many quality attributes. 2. Very little, if anything, must be known about functionality in order to draw quality attribute conclusions.
    44 Quality Attribute Design Quality Attributes and General Scenarios Attribute communities have a natural tendency to expand, but for our purposes the taxonomy does not matter: we do not care whether portability is an aspect of modifiability or whether reliability is an aspect of security. Rather, we want to articulate what it means to achieve an attribute by identifying the yardsticks by which it is measured or observed. To do this, we introduce the concept of a “general scenario.” Each general scenario consists of: • the stimuli that require the architecture to respond • the types of system elements involved in the response • the measures used to characterize the architecture’s response
    45 Quality Attribute Design Quality Attributes and General Scenarios (examples) For example: • A performance general scenario is: “Periodic messages arrive at a system. On average, the system must respond to each message within a specified time interval.” This general scenario describes “events arriving” and being serviced with some “latency.” A general scenario for modifiability states “changes arriving” and the “propagation of the change through the system.” • A security general scenario combines “threats” with a response of “repulsion, detection, or recovery.” A usability general scenario ties “the user” together with a response of “the system performs some action.”
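The three parts of a general scenario listed above (stimulus, element types, response measures) can be captured as a small data structure. This is our own illustrative sketch; the class and field names are not part of the source material.

```python
from dataclasses import dataclass

@dataclass
class GeneralScenario:
    """A quality-attribute general scenario: stimulus, affected
    element types, and the measure used to characterize the response."""
    attribute: str
    stimulus: str
    elements: list
    response_measure: str

# The performance general scenario from the slide, expressed as data.
performance = GeneralScenario(
    attribute="performance",
    stimulus="periodic messages arrive at the system",
    elements=["message handler"],
    response_measure="average latency within a specified time interval",
)

print(performance.attribute, "->", performance.response_measure)
```

Encoding scenarios this way makes the later filtering step ("which general scenarios apply to this system?") a simple selection over a collection of such records.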
    46 Quality Attribute Design General Scenarios • If an architect or community wants to characterize a particular attribute, a general scenario can be developed to describe it. This addresses the first two problems that we claimed inhibited codifying the relationship between architecture and quality attributes; that is, attribute definitions and their relationships. • General scenarios can apply to any system. For example, we can discuss a change to a system platform without any knowledge of the system, because every system has a platform. We can also discuss a change to the representation of a data producer (in a producer/consumer relationship) without knowing anything about the type of data or the functions carried out by the producer. This is not the same as saying that every general scenario applies to every system.
    47 Quality Attribute Design General Scenarios (example) For example: • “The user desires to cancel an operation” is one usability general scenario stimulus. This is not the case for every system. For some systems, cancellation is not appropriate. • Thus, when applying general scenarios to a particular system, the first step is filtering the universe of general scenarios to determine those of relevance. This is essentially the process of determining attribute-specific requirements. Controlling (or at least significantly affecting) at least one measure of an attribute is the task of an architectural mechanism.
    48 Quality Attribute Design Architectural Mechanisms Earlier we stated that architectural mechanisms directly contribute to quality attributes. Examples of architectural mechanisms are the data router, caching, and fixed priority scheduling. These mechanisms help achieve specific quality attribute goals as defined by the general scenarios. Examples: • The data router protects producers from additions and changes to consumers, and vice versa, by limiting the knowledge that producers and consumers have of each other. This contributes to modifiability. • Caching reduces response time by providing a copy of the data close to the function that needs it. This contributes to performance. • Fixed priority scheduling interleaves multiple tasks to control response time. This contributes to performance.
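The caching mechanism named above can be illustrated with a minimal sketch: a copy of previously computed data is kept close to the caller, trading memory (replication) for response time. The function name and simulated latency are our own assumptions, not from the source.

```python
import functools
import time

@functools.lru_cache(maxsize=128)
def fetch_record(key):
    """Stand-in for an expensive lookup (e.g., a remote or disk read)."""
    time.sleep(0.01)  # simulated access latency
    return {"key": key, "value": key * 2}

fetch_record(7)                      # first call pays the full latency
start = time.perf_counter()
fetch_record(7)                      # served from the cache
elapsed = time.perf_counter() - start
print(f"cached call took {elapsed * 1000:.3f} ms")
print(fetch_record.cache_info())     # hit/miss counters for the cache
```

The later side-effect analysis applies directly here: caching improves the performance scenario but raises a modifiability question (stale copies when the underlying data changes) that another mechanism must answer.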
    49 Quality Attribute Design Architectural Mechanisms (cont.) • Each mechanism is targeted at one or more quality attributes. Also, each mechanism implements a more fundamental strategy to achieve the quality attribute. For example: – The data router uses indirection to achieve modifiability. – Caching uses replication to achieve performance. – Fixed priority scheduling uses pre-emptive scheduling to achieve performance.
    50 Quality Attribute Design Architectural Mechanisms (cont.) • We can also extend the list of mechanisms and general scenarios to address a wide range of situations. – If we introduce a new general scenario, we may need to introduce new mechanisms that contribute to it. – If we discover new mechanisms for dealing with existing general scenarios, we can add them to the list. • In either case, we can describe the new mechanism(s) without affecting any other mechanism.
    51 Quality Attribute Design Levels of Abstraction for Mechanisms Mechanisms exist in a hierarchy of abstraction levels. For example: – Separation is a mechanism that divides two coherent entities. This represents a high level of abstraction. – A more concrete mechanism, at a lower level of abstraction, would specify the entities, such as separating data from function, function from function, or data from data. • In this case, each level has meaning and utility with respect to modifiability. • Increasing detail increases the precision of analysis. However, because the goal is establishing design primitives, the highest level of abstraction that still supports some reasoning should be maintained.
    52 Quality Attribute Design Levels of Abstraction for Mechanisms (cont.) What qualifies as an analysis? • By definition, an architectural mechanism helps achieve quality attribute goals. – There must be a reason why the mechanism supports those goals. Articulating that reason becomes our analysis. If we cannot state why a particular mechanism supports a quality attribute goal, then it does not qualify as a mechanism. – Choosing the correct level of granularity allows us to examine a property at its most fundamental level.
    53 Quality Attribute Design Desired Side Effects and Attributes Mechanisms have two consequences. • They help general scenarios achieve attributes, and they can influence other general scenarios as side effects. These two consequences drive two types of analysis. – The first analysis explains how the architectural mechanism helps the intended general scenario achieve its result. – The second analysis describes how to refine the general scenario in light of the information provided by the mechanism. This analysis reveals the side effects that the mechanism has on the other general scenarios. By referring to the specific mechanism that directly affects an attribute, a more detailed analysis of the side effect can be performed.
    54 Quality Attribute Design Desired Side Effects and Attributes (cont.) Using encapsulation as our example, we will analyze the modifiability general scenarios that reflect changes in function, environment, or platform. • As long as changes occur inside the encapsulated function (that is, hidden by an interface), the effect of these changes will be limited. If the expected changes cross the encapsulation boundary, the effect of the changes is not limited. – The analysis/refinement for performance general scenarios seeks to determine whether encapsulation significantly affects resource usage of the encapsulated component. – The analysis/refinement for reliability/availability general scenarios asks whether encapsulation significantly affects the probability of failure of the encapsulated component. A more detailed description of the performance or reliability analyses would be found in the appropriate performance or reliability mechanism write-ups.
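The encapsulation analysis above can be made concrete with a small sketch: changes hidden behind the interface stay local, while changes to the interface itself propagate to every caller. The class and method names are hypothetical illustrations.

```python
class SpeedSensor:
    """Encapsulated component: callers see only read_kmh()."""

    def read_kmh(self):
        # An internal change (e.g., swapping this raw-register read for
        # a calibrated one) stays hidden behind the interface.
        return self._read_raw() * 0.1

    def _read_raw(self):
        return 432  # stand-in for hardware access

def display(sensor):
    # Depends only on the interface, so it is unaffected by changes
    # inside the encapsulation boundary.
    return f"{sensor.read_kmh():.1f} km/h"

print(display(SpeedSensor()))
```

If `read_kmh()` were renamed or changed to return miles per hour, the change would cross the encapsulation boundary and `display` (and every other caller) would have to change with it, which is exactly the limiting condition stated above.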
    55 Quality Attribute Design Desired Side Effects and Attributes (cont.) • Table 1 shows the relationship between mechanisms and general scenarios. With mechanisms across the top and general scenarios down the left side, each cell holds one of three possibilities: 1. The mechanism contributes to this general scenario. 2. The mechanism has no effect on the general scenario. 3. The mechanism introduces side effects: questions to be answered by other mechanisms.
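The three possible cell values of such a table can be modeled as a small lookup structure. The mechanism/scenario entries below are our own illustrative examples, not a reproduction of Table 1.

```python
from enum import Enum

class Effect(Enum):
    CONTRIBUTES = "contributes to the general scenario"
    NO_EFFECT = "no effect on the general scenario"
    SIDE_EFFECT = "introduces side effects for other mechanisms to answer"

# (mechanism, general scenario) -> Effect; entries are illustrative only.
table = {
    ("data router", "modifiability: add a consumer"): Effect.CONTRIBUTES,
    ("data router", "performance: message latency"): Effect.SIDE_EFFECT,
    ("caching", "security: eavesdropping"): Effect.NO_EFFECT,
}

for (mechanism, scenario), effect in table.items():
    print(f"{mechanism} / {scenario}: {effect.name}")
```

Representing the table as data makes the second type of analysis mechanical: every SIDE_EFFECT cell is a question to be routed to the write-up of another mechanism.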
    56 Quality Attribute Design Desired Side Effects and Attributes (cont.)
    57 Quality Attribute Design Work Product Templates So far, we have described two concepts in this note: 1. General scenarios that characterize quality attributes 2. Architectural mechanisms • In this section, the templates used to document attribute stories and mechanisms, and the data that each template section should contain, are presented. • To understand how architectural mechanisms achieve quality goals, an understanding of the quality attributes themselves must first be obtained. An attribute story helps us do just that. It creates operational definitions for quality attributes.
    58 Quality Attribute Design Work Product Templates (cont.) • An attribute story presents the issues associated with an attribute and the mechanisms that could help achieve it. It is assumed that the architect already knows the material in the attribute story. The story simply acts as a checklist for the designer. Its parts include:
    59 Quality Attribute Design Work Product Templates (cont.) 1. Concepts of the Attribute – This section describes the attribute, including relevant stimuli and typical response measures. 2. General Attribute Scenarios – This section presents a number of system-independent, or general, scenarios. General scenarios describe the attributes. These high-level scenarios could apply to any system. Depending on the general scenario, the system can be characterized by mechanisms, modules, or descriptions of the hardware platform.
    60 Quality Attribute Design Work Product Templates (cont.) 3. Strategies for Achieving the Attribute – This section describes the basic strategies used to achieve the general scenarios. This sets the foundation for the next section of the template. 4. Enumeration of Mechanisms for Achieving the Attribute – This section lists and briefly describes the mechanisms used to achieve the attribute. Each mechanism will have its own full description; the brief descriptions here serve as a checklist for architects.
    61 Quality Attribute Design Mechanism Templates • A mechanism template provides the information the architect needs to reason about a specific mechanism for the five attributes under discussion.
    62 Quality Attribute Design Mechanism Templates (cont.) 1. Description of the Mechanism – This section describes a particular mechanism. 2. Analysis of the Mechanism – This section describes the strategy that the mechanism uses to achieve attribute goals. It also presents an argument that the mechanism does, in fact, support one or more general scenarios. The following sections apply to the attributes whose properties exist as side effects of the mechanism.
    63 Quality Attribute Design Mechanism Templates (cont.) 3. Modifiability General Scenarios – This section interprets modifiability general scenarios through the prism of the mechanism. 4. Performance General Scenarios – This section interprets performance general scenarios through the prism of the mechanism. 5. Reliability/Availability General Scenarios – This section interprets reliability/availability general scenarios through the prism of the mechanism.
    64 Quality Attribute Design Mechanism Templates (cont.) 6. Security General Scenarios – This section interprets security general scenarios through the prism of the mechanism. 7. Usability General Scenarios – This section interprets usability general scenarios through the prism of the mechanism.
    65 Quality Attribute Design Mechanisms • Data Router • Fixed Priority Scheduling
    66 Quality Attribute Design Mechanisms (cont.) Data Router • The data router (also called a data bus or a data distributor) mechanism implements the strategy of data indirection. It inserts a mediator between a data item producer and consumer. With a mediator in place, the only knowledge embedded in the data producer is that it must send the data item to the mediator. The mechanism that the data router uses to recognize consumers for a data item is discussed separately (see data registration). The following figure describes this mechanism.
    68 Quality Attribute Design Mechanisms (cont.) Data Router (cont.) • Producers are components that generate messages. Consumers are components that receive messages. It is possible for a component to both produce and consume different types of messages. The router directs messages of a particular type to the consumers of that type. We do not show the registration mechanism that the data router uses to identify producers and consumers. • Sometimes the data router also converts data. For example, suppose the producer of a speed measurement supplies it in kilometers per hour and the consumer wishes to receive it in miles per hour. To perform the conversion, the data router must know the units in which the data item is produced and the units that the consumer desires.
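A minimal data-router sketch, under stated assumptions: registration is simplified to a dictionary, only one unit conversion is shown, and the class and method names are our own illustration rather than a prescribed API.

```python
class DataRouter:
    """Mediator between producers and consumers of typed data items."""

    def __init__(self):
        self._consumers = {}   # data type -> list of consumer callbacks
        self._converters = {}  # (from_unit, to_unit) -> conversion function

    def register_consumer(self, data_type, callback):
        self._consumers.setdefault(data_type, []).append(callback)

    def register_conversion(self, from_unit, to_unit, fn):
        self._converters[(from_unit, to_unit)] = fn

    def publish(self, data_type, value, unit=None, want=None):
        # Convert when the producer's and consumer's units differ.
        if unit and want and unit != want:
            value = self._converters[(unit, want)](value)
        # Route to every registered consumer of this data type.
        for callback in self._consumers.get(data_type, []):
            callback(value)

router = DataRouter()
received = []
router.register_consumer("speed", received.append)
router.register_conversion("km/h", "mph", lambda v: v * 0.621371)
router.publish("speed", 100, unit="km/h", want="mph")
print(received)  # the speed measurement, delivered in mph
```

Note how the producer calls only `router.publish(...)`: it knows the mediator, not the consumers, which is the indirection that the modifiability scenarios below rely on.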
    69 Quality Attribute Design Mechanisms (cont.) Purpose of this Mechanism • The data router mechanism reduces the coupling between the data producer and consumer. This simplifies the process of adding new consumers or producers of a single type of data. • The stimulus in each of the following general scenarios is a change in function, platform, or environment: • Adding/removing/modifying a producer/consumer of a data item within a known data type. Adding or removing a producer/consumer of a data item should not affect the existing producers and consumers, since producers and consumers do not know each other. The new consumer or producer must be registered with the data router. (Registration is outside the scope of this mechanism.) If the producer being removed is the only element that supplies that data type, then consumers will not receive any data. Presumably, that is the intent of the modification.
    70 Quality Attribute Design Mechanisms (cont.) Purpose of this Mechanism (cont.) • Adding a new data type to a producer of a data item. Adding a new producer data type should not affect any existing consumers, since they can remain ignorant of the new data and its producers. The mediator may need to be modified if it has to transform the new data type. Existing producers are likewise unaffected. • Adding a new conversion type to the data router. The data router needs to be modified to perform the new conversion. Existing conversions, producers, and consumers are not modified. • Modifying the data router. The effect of modifying the data router depends, obviously, on the type of modification. Adding a new data type should not affect existing data types. Modifying the interfaces of the data router will require all producers and consumers to be modified.
    71 Quality Attribute Design Mechanisms (cont.) Performance General Scenario: • What is the mean/largest/minimum size (bytes)/distribution of data items? • Can multiple messages arrive at the mediator simultaneously? • What is the maximum/mean number of consumers for each data item? • What is the delay introduced by the mediator for the consumer of a data item? • Are messages produced periodically (at what period) or episodically (at what rate and distribution)? • Are the producers and consumers in the same process or in different processes? • If the producers and consumers are in different processes (or on different processors), what is the latency of message delivery? What are the resources involved in message delivery? • How are producers and consumers synchronized?
    72 Quality Attribute Design Mechanisms (cont.) Security General Scenario: • Can eavesdroppers listen to the transmission of any data items to/from the data router? • Is it possible for unauthorized producers to send messages to the mediator or for unauthorized consumers to receive messages from the mediator? Availability/Reliability General Scenario: • What is the likelihood/impact if a produced message does not arrive? • What is the likelihood/impact of a message not being produced in time?
    73 Quality Attribute Design Mechanisms (cont.) Usability General Scenario: • What data items sent to the mediator should be visible to the user? • What feedback should the user receive about the messages sent to/from the mediator? • What needs to be recorded in order to cancel or undo an operation, or to evaluate the system for usability?
    74 Quality Attribute Design Mechanisms (cont.) Fixed Priority Scheduling • Process scheduling is a common mechanism provided by most operating systems. It automatically interleaves execution of units of concurrency (such as processes and/or threads). Fixed priority scheduling is a uniprocessor scheduling policy that assigns a priority to each process/thread in the system. The scheduler uses this priority to decide which process to execute. It assigns the processor to the highest-priority process/thread that is not currently blocked. • The following figures present a module view of a fixed priority scheduler and a sequence diagram of a fixed priority scheduler with two tasks. The high-priority task executes until it blocks. The next-priority task then executes until the high-priority task is ready to execute again.
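The behavior just described (the highest-priority ready task runs; a lower-priority task runs only while the higher-priority one is blocked) can be sketched as a toy tick-based scheduler. This is an illustration of the policy, not an operating-system implementation; the task parameters are invented.

```python
def schedule(tasks, ticks):
    """tasks: dict of name -> {'priority': int, 'blocked_until': int}.
    At each tick, run the highest-priority task that is not blocked."""
    timeline = []
    for t in range(ticks):
        ready = [name for name, s in tasks.items() if s["blocked_until"] <= t]
        if ready:
            # Fixed priority: always pick the highest-priority ready task.
            timeline.append(max(ready, key=lambda n: tasks[n]["priority"]))
        else:
            timeline.append(None)  # processor idle
    return timeline

tasks = {
    "high": {"priority": 2, "blocked_until": 2},  # blocked for two ticks
    "low":  {"priority": 1, "blocked_until": 0},
}
print(schedule(tasks, 4))  # low runs while high is blocked, then high preempts
```

The printed timeline shows the sequence-diagram behavior from the slide: the low-priority task executes only until the high-priority task becomes ready.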
    75 Quality Attribute Design Mechanisms (cont.) Fixed Priority Scheduling (cont.)
    76 Quality Attribute Design Mechanisms (cont.) Fixed Priority Scheduling (cont.)
    77 Quality Attribute Design Mechanisms (cont.) Purpose of this Mechanism • The purpose of any scheduling mechanism is two-fold: 1. Separate the management of concurrency, and the concomitant performance (i.e., timing) concerns, from the concerns of implementing functionality. This allows the architect to design a structure that reflects the “natural concurrency” of the system’s environment (such as handling multiple message types, or sensor types that have disparate timing properties). 2. Allocate a single processor to event stimuli in the manner required by their timing requirements. • Fixed priority scheduling allows for a level of predictability that corresponds to the timing estimates associated with the various processing scenarios. This predictability is due to scheduling analysis techniques (such as rate monotonic analysis).
    79 Quality Attribute Design Mechanisms (cont.) Purpose of this Mechanism (cont.) • The set of general scenarios addressed by this mechanism is: <Events arrive periodically or sporadically> and must be completed <in the worst case> <within a time interval and/or subject to data loss constraints> <while executing on a uniprocessor>. • We will make two assumptions for this analysis: 1. The dominant character of the processes is deterministic. 2. The only effects of concern for this mechanism derive from allocating the processor to processes. (The effects of processes sharing data, acquiring memory, or using hardware devices are not considered here.)
    80 Quality Attribute Design Mechanisms (cont.) Purpose of this Mechanism (cont.) • Simplest case: the processes are the only sources of execution time, the processes are pre-emptible by higher-priority processes, and no pipelines exist in the topology. • In this case, only two factors affect the latency of a process: (1) its own execution time and (2) preemption due to higher-priority processes. Latency can be computed by determining the fixed point of the expression L_i = C_i + Σ_{j ∈ hp(i)} ⌈L_i / T_j⌉ · C_j, where C_i is the execution time of process i (the process whose latency is being computed) and the summation term computes the effects of preemption by each higher-priority process j, with execution time C_j and period T_j.
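The fixed point can be found by iteration, as in classical response-time analysis: start with L = C_i and repeat L = C_i + Σ ⌈L/T_j⌉·C_j until L stops changing. This sketch assumes the simple case stated above (pre-emptible processes, no pipelines); the task parameters are invented for illustration.

```python
import math

def latency(c_i, higher_priority, max_iter=1000):
    """Fixed-point response time (latency) of process i.
    c_i: execution time of process i.
    higher_priority: list of (C_j, T_j) pairs for higher-priority processes."""
    L = c_i
    for _ in range(max_iter):
        # One application of L = C_i + sum(ceil(L / T_j) * C_j).
        nxt = c_i + sum(math.ceil(L / T) * C for C, T in higher_priority)
        if nxt == L:
            return L  # fixed point reached
        L = nxt
    raise ValueError("no fixed point: task set may be unschedulable")

# Process i (C=2) preempted by two higher-priority processes:
print(latency(2, [(1, 4), (1, 5)]))  # → 4
```

Each iteration folds in the additional preemptions caused by the longer response time computed so far, which is why the iteration converges from below when the task set is schedulable.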
    81 Quality Attribute Design Mechanisms (cont.) Performance Considerations • The following questions are specifically relevant to fixed priority scheduling: 1. What is the execution time of the processes? Is the maximum known? 2. What are the other sources of execution time (other than the processes), such as context switches, dispatching, or daemons for garbage collection? 3. How many priority levels does the scheduler support? 4. What are the sources of interrupts or non-pre-emptible sections?
82
Quality Attribute Design Mechanisms (cont.)
Performance Considerations (cont.)
5. How often do the processes execute? Is there a minimum amount of time between executions?
6. What is the flow of control/data between processes?
7. Are there various modes in which the above characteristics change?
8. What is the stochastic character of processing frequency and execution time?
83
Quality Attribute Design Mechanisms (cont.)
Modifiability Considerations
• In general, any of the performance parameters mentioned in the preceding section might change. Therefore, the following questions could be asked:
1. What is the impact of adding or deleting processes?
2. What is the impact of changing process priorities?
3. What is the impact of changing process execution times?
4. What is the impact of changing event arrival rates?
5. What is the impact of switching operating systems?
84
Quality Attribute Design Mechanisms (cont.)
Reliability Considerations
1. How can errors propagate between processes?
2. Do processes run in their own address space?
3. What happens if processes exceed their so-called worst-case execution estimates?
85
Quality Attribute Design Mechanisms (cont.)
Security Considerations
1. Can processes be killed and then spoofed?
2. Can new processes be created (with the intent of causing denial of service)?
86
Quality Attribute Design Mechanisms (cont.)
Usability Considerations
1. How is average response time managed?
2. Do any of the processes generate data from which the user gets feedback?
3. Is consideration of this feedback data included in the setting of deadlines?
87
Quality Attribute Design Attribute Stories
• The Modifiability Story
• The Performance Story
88
Quality Attribute Design Modifiability Story
Concepts of the Modifiability Story
• Modifiability is the ability of a system to be changed after it has been deployed. In this case, the stimulus is the arrival of a change request and the response is the time necessary to implement the change. Requests can reflect changes in functions, platform, or operating environment. A request could also modify how the system achieves its quality attributes. These changes can involve the source code or data files. They can be made at compile time or run time.
89
Quality Attribute Design Modifiability Story (cont.)
• A request arrives to change the functionality of the system. The change can be to add new functionality, to modify existing functionality, or to delete a particular functionality.
• The hardware platform is changed. The system must be modified to continue to provide current functionality. There may be a change in hardware (including input and output hardware), operating system, or COTS middleware.
• The operating environment is changed. For example, the system now has to work with systems previously not considered. It may have to operate in a disconnected mode. It may have to dynamically discover what devices are available, or it may have to react to changes in the number or characteristics of users.
• A request arrives to improve a particular quality attribute such as reliability, performance, or usability. The system should be modified to achieve better usability, for instance.
• A new distributed user arrives and wishes to utilize just some of the services available.
• As a result of quality considerations, some portion of the system modifies its behavior during run time.
90
Quality Attribute Design Modifiability Story (cont.)
The general form for modifiability scenarios is:
Time of modification ::= compile time | run time
Type of modification ::= add | modify | delete
Target of modification ::= functionality | platform (hardware, OS, middleware) | environment (interoperate, disconnect, #users) | quality attribute
91
Quality Attribute Design Modifiability Story (cont.)
Strategies that achieve modifiability:
• At compile time, modifiability is achieved through three basic strategies.
1. Indirection – Indirection involves inserting a mediator, such as a data router, to allow variation within a particular area of concern. For example, when producers and consumers of data communicate via a mediator, they typically have no direct knowledge of each other. This means that additional producers or consumers could be added without change to the existing consumers or producers. A virtual device or machine can also serve as a mediator in the indirection strategy.
2. Separation – This strategy separates data and function that address different concerns. Since the concerns are separate, we can modify one concern independently of another. Isolating common function is another example of a separation strategy.
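A minimal sketch of the indirection strategy, assuming a hypothetical `DataRouter` mediator (the class and topic names are illustrative, not from the course): producers and consumers interact only with the router, so either side can be added without changing the other.

```python
class DataRouter:
    """Hypothetical mediator: producers publish to a topic, consumers
    subscribe to it, and neither side knows about the other."""

    def __init__(self):
        self._subscribers = {}   # topic -> list of consumer callbacks

    def subscribe(self, topic, callback):
        self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, data):
        # Deliver to every consumer currently registered for the topic
        for callback in self._subscribers.get(topic, []):
            callback(data)

router = DataRouter()
received = []
router.subscribe("sensor/temp", received.append)   # a consumer
router.publish("sensor/temp", 21.5)                # a producer
print(received)                                    # → [21.5]
```

Adding a second consumer is just another `subscribe` call; the producer's code is untouched, which is exactly the variation the slide describes.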
92
Quality Attribute Design Modifiability Story (cont.)
3. Encoding function into data – meta-data and language interpreters – By encoding some function into data and providing a mechanism for interpreting that data, we can simplify modifications that affect the parameters of that data.
• At run time, modifiability is achieved using a strategy of dynamic discovery and reconfiguration. This strategy involves discovering the resources or functions that become dynamically available as the system’s environment changes, negotiating for access to those resources, and then accessing the resource.
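A small illustration of encoding function into data (the discount-policy example is invented for this sketch): the policy lives in a rule table that a tiny interpreter walks, so behavior changes by editing data rather than code.

```python
# A policy encoded as data: changing behavior means editing the table,
# not the interpreter below.
RULES = [
    {"min_total": 100.0, "discount": 0.10},
    {"min_total": 50.0,  "discount": 0.05},
]

def apply_discount(total, rules=RULES):
    """Interpret the rule table: the first matching rule wins."""
    for rule in rules:
        if total >= rule["min_total"]:
            return round(total * (1.0 - rule["discount"]), 2)
    return total   # no rule matched: no discount

print(apply_discount(120.0))   # → 108.0
print(apply_discount(60.0))    # → 57.0
```

To add a new discount tier, only the `RULES` data changes; the interpreting function is untouched.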
93
Quality Attribute Design Modifiability Story (cont.)
Mechanisms that achieve modifiability
• data router (also called a data bus or a data distributor) – This intermediary mechanism directs (and possibly transforms) data from producers to consumers. It requires a supporting mechanism to identify producers and consumers.
• data repository – This mechanism stores data for later use.
• virtual machine – This mechanism places an intermediary between function users and producer(s).
94
Quality Attribute Design Modifiability Story (cont.)
Mechanisms that achieve modifiability (cont.)
• independence of interface from implementation – This mechanism allows architects to substitute different implementations of the same functionality. Interface definition languages (IDLs) are an example of this mechanism.
• interpreter – This mechanism involves encoding function into parameters and more abstract descriptions, allowing architects to easily modify them.
• client-server – This mechanism involves providing a collection of services from a central process and allowing other processes to use these services through a fixed protocol.
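A sketch of independence of interface from implementation, using hypothetical names: clients depend only on an abstract `MessageStore` interface, so a database-backed implementation could later replace the in-memory one without touching client code.

```python
from abc import ABC, abstractmethod

class MessageStore(ABC):
    """The interface clients program against."""
    @abstractmethod
    def save(self, msg): ...
    @abstractmethod
    def count(self): ...

class InMemoryStore(MessageStore):
    """One implementation; another (file- or database-backed) could be
    substituted without changing any client."""
    def __init__(self):
        self._msgs = []
    def save(self, msg):
        self._msgs.append(msg)
    def count(self):
        return len(self._msgs)

def log_event(store: MessageStore, msg):
    store.save(msg)        # the client depends only on the interface

store = InMemoryStore()
log_event(store, "boot")
print(store.count())       # → 1
```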
97
Quality Attribute Design Performance Story
Concepts of Performance
• Performance involves adjusting and allocating resources to meet system timing requirements. The stimulus can be any event that requires a response, such as the arrival of a message, the expiration of an interval of time (as signaled by a timer interrupt), the detection of a significant change in the system’s environment, a user input event, etc.
• In this situation, the system consists of a collection of processors connected by a network, involving “schedulable entities” such as processes, threads, and messages.
98
Quality Attribute Design Performance Story (cont.)
Concepts of Performance (cont.)
• Important response measures include:
– Latency – the amount of time that passes from the event occurrence until the completed response. Average-case latency and worst-case latency are variants of latency measures.
– Throughput – the number of events responded to per unit of time.
– Precedence – preserving ordering constraints on the responses to events.
99
Quality Attribute Design Performance Story (cont.)
Concepts of Performance (cont.)
• Jitter – variance in the time interval between response completions.
• Growth capacity – ability to add load while meeting timing requirements.
At a higher level, performance concerns the ability to predictably provide various levels of service (measured in the above terms) under varying circumstances. Performance depends upon the load on the system and the resources available to process the load.
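As a worked illustration (the timestamps are made up), the response measures above can be computed directly from event arrival and completion times:

```python
# Event arrival and response-completion timestamps (seconds)
arrivals    = [0.0, 1.0, 2.0, 3.0]
completions = [0.4, 1.5, 2.3, 3.6]

# Latency: completion time minus arrival time, per event
latencies  = [c - a for a, c in zip(arrivals, completions)]
worst_case = max(latencies)
average    = sum(latencies) / len(latencies)

# Throughput: responses completed per unit time over the whole run
throughput = len(completions) / (completions[-1] - arrivals[0])

# Jitter: variance of the interval between successive completions
gaps     = [b - a for a, b in zip(completions, completions[1:])]
mean_gap = sum(gaps) / len(gaps)
jitter   = sum((g - mean_gap) ** 2 for g in gaps) / len(gaps)

print(round(worst_case, 3), round(average, 3),
      round(throughput, 3), round(jitter, 4))
```

Here the worst-case latency is 0.6 s, the average 0.45 s, and the throughput just over one response per second; the jitter is small but nonzero because the completion gaps are uneven.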
100
Quality Attribute Design Performance Story (cont.)
The general form of general performance scenarios is:
• <Events arrive periodically, sporadically, or randomly> and must be completed <in the worst case or on the average> <within a time interval and/or subject to data loss constraints and/or subject to throughput constraints and/or constrained by order relative to other event responses and/or adhering to jitter constraints>.
• A sample of general performance scenarios, assuming unchanging resource requirements, follows:
• Periodic or sporadic events (that is, events with bounded inter-arrival times) arrive. In all cases, the system must execute its response to the event within a specified time interval. (One common example is the time interval that starts when the event occurs and ends one period later.)
• Periodic or sporadic events arrive. In all cases, no more than n of m events can be lost.
• Periodic or sporadic events arrive. On average, the system must execute its response to the event within a specified time interval.
101
Quality Attribute Design Performance Story (cont.)
The general form of general performance scenarios (cont.):
• Periodic or sporadic events arrive. On average, no more than n of m events can be lost.
• Events arrive stochastically. On average, the system must execute its response to the event within a specified time interval. The response time, however, should never be greater than a specified upper bound.
• Events arrive stochastically. On average, the system must complete n responses per unit time.
102
Quality Attribute Design Performance Story (cont.)
The general form of general performance scenarios (cont.):
• We can extend the above general performance scenario template to include cases where the resource requirements change. These cases include:
• Periodic or sporadic events arrive. In all cases, the system must carry out its response to the event within a specified time interval by executing on n nodes in a network, where each node consists of m processors on a bus and each processor is executing p processes. If a subset of the resources fails, the most critical subset of event responses must continue to be met.
103
Quality Attribute Design Performance Story (cont.)
The most general form for a performance scenario considers many or all of the following aspects:
• Arrival pattern ::= periodic | sporadic | random
• Latency requirement ::= worst-case | average-case [with upper bound] | window
• Throughput requirement ::= n completions per unit time
• Data loss requirement ::= no more than m of n can be lost
• Platform ::= uniprocessor | distributed
• Resources ::= processors, networks, buses, memory
• Mode ::= (resource driven | load driven), (discrete | continuous)
104
Quality Attribute Design Performance Story (cont.)
Strategies that achieve performance:
• Performance is achieved by managing the resources (such as CPUs, buses, networks, memory, data, etc.) that affect system response. That is, timing behavior is determined by allocating resources to resource demands, choosing between conflicting requests for resources, and managing resource usage. Therefore, there are three key techniques for managing performance:
1. Resource allocation – These are policies for satisfying resource demands by allocating various resources; for example, deciding on which of several available processors a process will execute.
105
Quality Attribute Design Performance Story (cont.)
Strategies that achieve performance (cont.):
2. Resource arbitration – These are policies for choosing between various requests for a single resource; for example, deciding which of several processes will execute on a processor. Arbitration includes preemptive strategies and exclusive use strategies.
3. Resource usage – These are strategies for affecting resource demands; for example, deciding how to reduce the execution time needed to respond to one or more events, or deciding not to execute certain tasks during periods of intense computation by high-priority tasks.
Each of these aspects affects the completion time of the response. As a result, each aspect can be viewed as a category of mechanism.
106
Quality Attribute Design Performance Story (cont.)
Mechanisms that Achieve Performance:
In general, performance can be achieved through resource allocation, resource arbitration, and resource use.
Resource allocation
1. Load balancing – spreading the load evenly across a set of resources.
2. Bin packing – allocation based on measures of spare capacity (for example, schedulable utilization); bin packing maximizes the amount of work that can be performed by a collection of resources.
Resource arbitration (preemptive)
1. Dynamic priority scheduling – allows the priority of a process to vary from one invocation to the next (e.g., earliest-deadline-first scheduling).
107
Quality Attribute Design Performance Story (cont.)
Mechanisms that Achieve Performance (cont.):
3. Fixed priority scheduling – the priorities of processes are fixed from one invocation to the next (e.g., rate monotonic or deadline monotonic analysis).
4. Aperiodic service strategies – mechanisms for scheduling aperiodic processes in a periodic context (e.g., slack stealing).
5. Static scheduling – timelines that are calculated prior to runtime (e.g., cyclic executive).
6. Queue overwrite policy – mechanism for handling overflow in a bounded queue (e.g., overwrite oldest).
7. QoS-based methods – methods that explicitly consider quality of service in scheduling decisions (e.g., QRAM).
8. Overload management/load shedding – mechanisms that determine which processes receive resources when the system cannot service all requests.
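As an illustration of the analysis behind fixed priority scheduling, the well-known Liu and Layland utilization bound for rate monotonic scheduling gives a sufficient (though not necessary) schedulability test; the task set below is invented for the example.

```python
def rm_schedulable(tasks):
    """Liu & Layland test for rate monotonic (fixed-priority) scheduling:
    the task set is schedulable if total utilization does not exceed
    n * (2**(1/n) - 1) for n tasks.  tasks = [(C_i, T_i), ...]
    Passing the test is sufficient but not necessary for schedulability."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1.0 / n) - 1)
    return utilization, bound, utilization <= bound

# Three periodic tasks: (execution time, period)
u, bound, ok = rm_schedulable([(1, 4), (2, 10), (2, 20)])
print(round(u, 3), round(bound, 3), ok)   # → 0.55 0.78 True
```

A task set that fails this test may still be schedulable; the exact answer comes from response-time analysis rather than the utilization bound.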
108
Quality Attribute Design Performance Story (cont.)
Mechanisms that Achieve Performance (cont.):
(Exclusive use)
9. Synchronization policies – mechanisms for managing mutually exclusive access to shared resources (e.g., priority inheritance).
Resource use
• Caching – using a local copy of data to reduce access time.
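A minimal caching sketch using Python's standard `functools.lru_cache`; the expensive fetch is simulated, and the function name is invented for the example.

```python
import functools
import time

@functools.lru_cache(maxsize=None)
def load_record(key):
    """Stand-in for an expensive fetch (disk, network, database)."""
    time.sleep(0.01)            # simulated access cost
    return {"key": key}

load_record("a")                # miss: pays the access cost
load_record("a")                # hit: served from the local copy
print(load_record.cache_info().hits)   # → 1
```

The second call never reaches the slow path, which is the access-time reduction the slide describes; a real cache must also decide when a local copy becomes stale.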
110
Defect Prevention
• Defect prevention comprises the activities required to isolate and correct software design and implementation issues that could lead to defects requiring rework to make the software operational at the specified quality level.
111
Defect Prevention
• Formal defect prevention activities should be embedded in the software life-cycle process. This empowers the development team and the SQA organization to analyze the causes of defects and to ensure that corrective process improvements, which prevent future defect insertion and enhance the detection process, are implemented.
112
Defect Prevention
• Defect prevention directly relates to the quality of the development process.
• The degree of prevention is dependent on the degree of process improvement accomplished throughout the development.
• In general, it is found that 34% of a project's effort is spent designing software, 12% on coding, and 22% on testing, and that 44% of total effort goes to fixing problems that could have been avoided with better up-front design.
115
Defect Prevention
• Defects happen! If we cannot build defect-free software the first time, then the next best way to build good software is to learn from our mistakes and build it right the next time.
• Defect causal analysis is the best method for improving software quality from the SQA perspective. It is an orderly technique for identifying the causes of problems and preventing the defects resulting from them. Causal analysis focuses both on defect discovery and on what caused the defect to occur.
• The importance of this SQA approach is that it provides the necessary feedback for the improvement of tool use, software engineering training and education, and, ultimately, the development process itself.
116
Defect Prevention
• Perhaps the most important attribute of software quality assurance is defect causal analysis. Quality software demands that defects be eliminated and prevented, not only from all products delivered to users, but also from all the processes that create those products.
• Finding and correcting mistakes makes up an inordinately large portion of total software development cost. As we have seen, the level of effort for rework to correct defects is typically 40% to 50% of total software development effort.
118
Defect Prevention
• The following figure illustrates the causal analysis process:
– At the start of each development phase, an entrance conference is held to review deliverables from the previous phase, to review process methodology guidelines and standards, and to set the team quality goals. These exercises start the defect prevention process, since they concentrate the team’s attention on the process-oriented development details soon to be performed.
– Throughout each phase, audits, reviews, and formal peer inspections are scheduled, in addition to walkthroughs by corporate management. During these reviews, each work product is checked for defects, and rework is performed (as required) with follow-up reviews.
120
Defect Prevention
• The most important activity during this process is the preliminary causal analysis of the defects themselves.
• During causal analysis meetings, individual problems are isolated and analyzed (somewhere between 1 and 20 defects can be scrutinized).
• Each defect item is then added to a database, where it is tracked by its description, how it was resolved, and the preliminary analysis of its cause.
• Another meeting, consisting of a brainstorming session, is held at the end of the development phase. The purpose of this meeting is to analyze defect causes, evaluate results against the goals set at the start of the phase, and develop a list of suggested process improvements.
121
Defect Prevention
• The SQA team (e.g., people from the tools, training, development, support, and testing groups) must then respond to these suggestions. The process action team is responsible for:
– Action item prioritization,
– Action item status tracking,
– Action item implementation,
– Dissemination of feedback,
– Causal analysis database administration,
– Analysis of defects for generic classification,
– and success story visibility.
122
Defect Prevention
• In a sample program, a causal analysis tool is used to predict the total number of defects in core software modules and the detection rate for finding those problems.
• Use of the model has greatly reduced software defect insertion over the span of the software project. The figure illustrates how the defect insertion rate progressively decreased by one-third between production Blocks 30 and 40, and again between Blocks 40 and 50.
123
Defect Prevention
• Block 30B software defect density = 9.8 defects per thousand delivered source instructions (KDSI)
• Block 40B software defect density = a one-third reduction in the software defect insertion rate from Block 30B
• Block 50B software defect density = a one-third reduction in the software defect insertion rate from Block 40B
124
Defect Prevention
Defect Removal Efficiency
• An anomaly about defect detection in large software-intensive systems was identified in studies performed by major corporations: the defects they found were not randomly distributed throughout software applications.
• Defects clump into localized sections, called “defect-prone modules.” As a rule, about 5% of the modules in large software systems will be infected with almost 50% of reported defects.
• Once those modules are identified, the defects are usually removable. High complexity and poor coding procedures are often the cause of defect concentrations in defect-prone modules.
• One useful product of defect measurement is called “defect removal efficiency.” This cumulative measure is defined as the ratio of defects found prior to delivery of the software system to the total number of defects found throughout development.
• [JONES91] If modules are kept small (e.g., 100 SLOC), it is often cheaper and faster to simply rewrite a module rather than search for and remove its defects.
125
Defect Prevention
Defect Removal Efficiency (cont.)
• Defect removal efficiency is a cumulative measure defined as the ratio of defects found prior to delivery of the software system to the total number of defects found throughout development:

Defect Removal Efficiency = (Defects Found Prior To Delivery) / (Total Defects Found)

• It can be seen that small modules are often cheaper and faster to simply rewrite rather than search for and remove their defects.
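The ratio is straightforward to compute; the defect counts below are made up for illustration.

```python
def defect_removal_efficiency(found_before_delivery, found_after_delivery):
    """DRE = defects found prior to delivery / total defects found,
    where the total includes defects reported after delivery."""
    total = found_before_delivery + found_after_delivery
    return found_before_delivery / total

# 95 defects caught during development, 5 reported from the field
print(defect_removal_efficiency(95, 5))   # → 0.95
```

Note that the denominator grows as field defects are reported, so DRE for a release can only be known with confidence some time after delivery.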
126
Defect Prevention
Defect Removal Efficiency (cont.)
• The defect removal efficiency indicator gives the cumulative percentage of previously injected defects that have been removed by the end of each development phase.
• Since the cost of defect removal roughly doubles with each phase, early removal must be a process improvement priority. With this goal, each newly delivered software product can be expected to be of significantly higher quality than its predecessor.
• Increases in defect removal efficiency can be accomplished by improving the code inspection and unit testing processes.
127
Defect Prevention
If the defect cannot be detected and does not show itself as output, why bother removing it?
• With mission- and safety-critical software operating under maximally stressed conditions, the chance of a latent defect-related software failure often increases beyond acceptable limits.
• This dichotomy amplifies the need to detect, remove, and ultimately prevent the causes of errors before they become elusive software defects.
• Latent, undetected defects have a tendency to crop up when the software is stressed beyond the scope of its developmental testing. It is at these times, when the software is strained to its maximum performance, that defects are the most costly, or even fatal.
128
Defect Prevention
• For systems where failure to remove defects before delivery can have catastrophic consequences in terms of failed missions or the loss of human life, defect removal techniques must be decidedly intense and aggressive.
• For instance, because the lives of astronauts depend on the reliability of Space Shuttle software, the software defect removal process employed on this program has had a near-perfect record. Across the 500,000 lines of code for each of the six shuttles delivered, there was a zero-defect rate for the mission-unique data tailored for each shuttle mission, and the source code had 0.11 defects per KLOC.
129
Defect Prevention
Space Shuttle Defect Removal Process Improvement
[Figure: a product flows through process elements A–D; a root cause introduces the original defect, the defect escapes detection at later process elements, and similar additional defects remain undetected]
Steps performed for every defect (regardless of magnitude):
(1) Remove the defect
(2) Remove the root cause of the defect
(3) Eliminate the process escape deficiency
(4) Search/analyze the product for other, similar escapes
130
Defect Prevention
• Studies show that 31.2% of the defects found during system testing are inserted during the requirements analysis and design phases, 56.7% are inserted during coding, and 12.1% are inserted during unit testing and integration.
• The number of defects at the completion of unit testing, and the number of defects delivered to the field, are well-documented industry averages.
131
Defect Prevention
Industry Average Defect Profile (defects per KLOC, with fielded defects per KLOC):
Requirements 20 | Design 40 | Code 100 | Unit Test 20 | Integration Test 50 | System Test 10
Defect Profile with Inspections (reduced defects per KLOC; industry average in parentheses; inspections applied at the requirements, design, and code phases):
Requirements 5 (20) | Design 8 (40) | Code 15 (100) | Unit Test 3 (20) | Integration Test 7 (50) | System Test 1 (10)
60% of software defects can be traced back to the design process.
132
Defect Prevention
• In addition to the gains in quality, inspections produce corresponding gains in productivity, as the amount of rework needed to fix defects is greatly reduced.
[Figure: people resources plotted against schedule (Planning, Requirements, Design, Coding, Testing, Delivery), comparing development with and without inspections]
133
Defect Prevention
• What is the generic defect prevention process?
[Figure: the generic defect prevention process — “Establish and Maintain Defect Prevention Mechanism,” “Perform Defect Prevention Activities,” and “Implement Organizational Actions,” connected by status, sponsorship, infrastructure, oversight, and organizational issues, and feeding organizational learning, the organization’s and project’s process assets, and process improvement]
135
Key Software Metrics
A Rigorous Framework for Software Measurement
• The software engineering literature abounds with software 'metrics' falling into one or more of the categories described above. Somebody new to the area seeking a small set of 'best' software metrics is therefore bound to be confused, especially as the literature presents conflicting views of what constitutes best practice. In this section we establish a framework relating different activities and different metrics. The framework enables readers to distinguish the applicability (and value) of many metrics. It also provides a simple set of guidelines for approaching any software measurement task.
136
Key Software Metrics
Measurement is the process by which numbers or symbols are assigned to attributes of entities in the real world in such a way as to characterize them according to clearly defined rules. The numerical assignment is called the measure.
• The theory of measurement provides the rigorous framework for determining when a proposed measure really does characterize the attribute it is supposed to.
• The theory also provides rules for determining the scale types of measures, and hence which statistical analyses are relevant and meaningful.
• We make a distinction between a measure (in the above definition) and a metric. A metric is a proposed measure. Only when it really does characterize the attribute in question can it truly be called a measure of that attribute.
137
Key Software Metrics
Some Key Metrics
– Complexity Metrics
– Resource estimation models
– Metrics of Functionality: Albrecht's Function Points
– Incident Count Metrics
138
Key Software Metrics
Complexity Metrics
• Prominent in the history of software metrics has been the search for measures of complexity. Because complexity is a high-level notion made up of many different attributes, there may never be a single measure of software complexity. Hundreds of complexity metrics have been proposed, most of them restricted to code. The best known are Halstead's software science and McCabe's cyclomatic number.
139
Key Software Metrics
Complexity Metrics (cont.)
Halstead's software science metrics
A program P is a collection of tokens, classified as either operators or operands.
n1 = number of unique operators
n2 = number of unique operands
N1 = total occurrences of operators
N2 = total occurrences of operands
Length of P: N = N1 + N2
Vocabulary of P: n = n1 + n2
140
Key Software Metrics
Complexity Metrics (cont.)
• Halstead's software science metrics (cont.)
Theory: the estimate of N is N̂ = n1 log2 n1 + n2 log2 n2
Theory: the effort required to generate P is E = (n1 × N2 × N × log2 n) / (2 × n2)
Theory: the time required to program P is T = E / 18 seconds
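A small sketch computing these measures from token lists; the example program and its tokenization are illustrative.

```python
import math

def halstead(operators, operands):
    """Halstead software science measures from the token sequences of
    a program P, split into operator and operand occurrences."""
    n1, n2 = len(set(operators)), len(set(operands))   # unique counts
    N1, N2 = len(operators), len(operands)             # total counts
    N = N1 + N2                                        # length of P
    n = n1 + n2                                        # vocabulary of P
    n_hat = n1 * math.log2(n1) + n2 * math.log2(n2)    # estimated length
    effort = (n1 * N2 * N * math.log2(n)) / (2 * n2)   # Halstead effort E
    return {"length": N, "vocabulary": n,
            "estimated_length": round(n_hat, 2),
            "effort": round(effort, 2),
            "time_seconds": round(effort / 18, 2)}     # T = E / 18

# Tokens of: x = x + 1   -> operators {=, +}, operands {x, x, 1}
print(halstead(["=", "+"], ["x", "x", "1"]))
```

For this one-line program: N = 5, n = 4, N̂ = 4.0, E = 15.0, and T ≈ 0.83 seconds.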
141
Key Software Metrics
Complexity Metrics (cont.)
Computing McCabe's cyclomatic number
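The slide's worked example did not survive extraction. As a hedged illustration, the cyclomatic number is commonly computed as V(G) = E − N + 2P for a control-flow graph with E edges, N nodes, and P connected components, or, for single-entry, single-exit code, as one more than the number of binary decision points; the flow graph below is an invented example.

```python
def cyclomatic_number(edges, nodes, components=1):
    """McCabe's cyclomatic number V(G) = E - N + 2P for a control-flow
    graph with E edges, N nodes, and P connected components."""
    return edges - nodes + 2 * components

def cyclomatic_from_decisions(decision_points):
    """Equivalent shortcut for a single-entry, single-exit graph:
    one more than the number of binary decision points."""
    return decision_points + 1

# An if/else inside a while loop gives a flow graph with 7 nodes
# (entry, while test, if test, then, else, join, exit) and 8 edges.
print(cyclomatic_number(edges=8, nodes=7))   # → 3
print(cyclomatic_from_decisions(2))          # → 3
```

Both routes agree: two decisions (the `while` and the `if`) yield V(G) = 3, the number of linearly independent paths through the code.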
142
Key Software Metrics
Resource Estimation Models
• COCOMO
Most resource estimation models assume the form: Effort = a × (Size)^b. The simple COCOMO model uses size measured in KDSI (thousands of delivered source instructions).
143
Key Software Metrics
Resource Estimation Models (cont.)
COCOMO (cont.)
• The COCOMO model comes in three forms: simple, intermediate, and detailed. The simple model is intended to give only an order-of-magnitude estimate at an early stage. The intermediate and detailed versions differ in that they have an additional multiplicative "cost driver" parameter determined by several system attributes.
• To use the model you have to decide what type of system you are building:
– Organic: refers to stand-alone in-house DP systems
– Embedded: refers to real-time systems or systems which are constrained in some way so as to complicate their development
– Semi-detached: refers to systems which are "between organic and embedded"
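The three modes above can be plugged into the simple (basic) COCOMO form; the coefficient table below uses the commonly published basic COCOMO values, which should be treated as illustrative rather than as this course's official figures.

```python
# Commonly published coefficients for the basic COCOMO model:
# effort in person-months = a * KDSI ** b
COEFFICIENTS = {
    "organic":       (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (3.6, 1.20),
}

def cocomo_effort(kdsi, mode="organic"):
    """Order-of-magnitude effort estimate from size in KDSI
    (thousands of delivered source instructions)."""
    a, b = COEFFICIENTS[mode]
    return a * kdsi ** b

print(round(cocomo_effort(32, "organic"), 1))   # ~91.3 person-months
```

The same 32-KDSI system estimated as embedded comes out at roughly 230 person-months, showing how strongly the mode choice dominates the estimate.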
144
Key Software Metrics
Resource Estimation Models (cont.)
COCOMO (cont.)
• The COCOMO-type approach to resource estimation has two major drawbacks, both concerned with its key size factor, KDSI (thousands of delivered source instructions):
• KDSI is not known at the time when estimates are sought, so it too must be predicted. This means that we are replacing one difficult prediction problem (resource estimation) with another which may be equally difficult (size estimation).
• KDSI is a measure of length, not size (it takes no account of functionality or complexity).
145
Key Software Metrics
Resource Estimation Models (cont.)
Albrecht's Function Points
• Albrecht's Function Points [Albrecht 1979] (FPs) is a popular product size metric (used extensively in the USA and Europe) that attempts to resolve these problems. FPs are supposed to reflect the user's view of a system's functionality. The major benefit of FPs over the length and complexity metrics discussed above is that they are not restricted to code. In fact, they are normally computed from a detailed system specification, using the equation FP = UFC × TCF.
146
Key Software Metrics
Resource Estimation Models (cont.)
Albrecht's Function Points (cont.)
FP = UFC × TCF
• where UFC is the Unadjusted (or Raw) Function Count, and TCF is a Technical Complexity Factor which lies between 0.65 and 1.35. The UFC is obtained by summing weighted counts of the number of inputs, outputs, logical master files, interface files, and queries visible to the system user, where:
– an input is a user or control data element entering an application;
– an output is a user or control data element leaving an application;
– a logical master file is a logical data store acted on by the application user;
– an interface file is a file or input/output data that is used by another application;
– a query is an input–output combination (i.e. an input that results in an immediate output).
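A sketch of the FP calculation assuming the commonly cited Albrecht average weights for each item type (the full method grades each item's complexity individually, and the item counts below are invented):

```python
# Commonly cited average weights for the unadjusted function count
WEIGHTS = {"inputs": 4, "outputs": 5, "queries": 4,
           "files": 10, "interfaces": 7}

def function_points(counts, tcf=1.0):
    """FP = UFC * TCF, with the TCF expected to lie in [0.65, 1.35]."""
    ufc = sum(WEIGHTS[item] * n for item, n in counts.items())
    return ufc * tcf

# A hypothetical small application, counted from its specification
counts = {"inputs": 5, "outputs": 4, "queries": 2,
          "files": 3, "interfaces": 1}
print(function_points(counts))          # → 85.0 (UFC with TCF = 1.0)
print(function_points(counts, 0.9))     # → 76.5
```

With TCF = 1.0 the result is just the UFC, which, as the criticism on the next slide notes, often predicts effort about as well as the adjusted figure.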
147 Key Software Metrics Resource Estimation Models (cont.) Albrecht's Function Points (cont.) • Function points are used extensively as a size metric in preference to LOC. Thus, for example, they are used to replace LOC in the equations for productivity and defect density. There are some obvious benefits: FPs are language independent and they can be computed early in a project. FPs are also being used increasingly in new software development contracts. • One of the original motivations for FPs was as the size parameter for effort prediction. Using FPs avoids the key problem identified above for COCOMO: we do not have to predict FPs; they are derived directly from the specification, which is normally the document on which we wish to base our resource estimates. • The major criticism of FPs is that they are unnecessarily complex. Indeed, empirical studies have suggested that the TCF adds very little in practical terms. For example, effort prediction using the unadjusted function count is often no worse than when the TCF is added [Jeffery et al 1993]. FPs are also difficult to compute and contain a large degree of subjectivity. There is also doubt that they actually measure functionality.
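The substitution is mechanical: wherever LOC appeared as the size denominator, an FP count can stand in. A hypothetical example with invented project figures:

```python
# FP-based versions of the ratio metrics mentioned above. Both are
# language independent and can be computed from the specification.

def productivity(function_points: float, person_months: float) -> float:
    """Function points delivered per person-month."""
    return function_points / person_months

def defect_density(defects: int, function_points: float) -> float:
    """Defects found per function point."""
    return defects / function_points

# Invented figures: 160 FP, 20 person-months of effort, 24 defects.
print(productivity(160, 20))     # 8.0 FP per person-month
print(defect_density(24, 160))   # 0.15 defects per FP
```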
148 Key Software Metrics Resource Estimation Models (cont.) Incident Count Metrics • No serious attempt to use measurement for software QA would be complete without rigorous means of recording the various problems that arise during development, testing, and operation. • It is important for developers to measure those aspects of software quality that can be useful for determining – how many problems have been found with a product – how effective are the prevention, detection and removal processes – when the product is ready for release to the next development stage or to the customer – how the current version of a product compares in quality with previous or competing versions • The terminology used to support this investigation and analysis must be precise, allowing us to understand the causes as well as the effects of quality assessment and improvement efforts.
149 Key Software Metrics Resource Estimation Models (cont.) Incident Count Metrics (cont.) One of the problems with problems is that the terminology is not uniform. If an organization measures its software quality in terms of faults per thousand lines of code, it may be impossible to compare the result with the competition if the meaning of "fault" is not the same. The software engineering literature is rife with differing meanings for the same terms. Below are just a few examples of how researchers and practitioners differ in their usage of terminology. Until terminology is the same, it is important to define terms clearly, so that they are understood by all who must supply, collect, analyze and use the data. Often, differences of meaning are acceptable, as long as the data can be translated from one framework to another.
150 Key Software Metrics Resource Estimation Models (cont.) Incident Count Metrics (cont.) • We describe the observations of development, testing, system operation and maintenance problems in terms of incidents, faults and changes. Whenever a problem is observed, we want to record its key elements, so that we can then investigate causes and cures. In particular, we want to know the following: 1. Location: Where did the problem occur? 2. Timing: When did it occur? 3. Mode: What was observed? 4. Effect: Which consequences resulted? 5. Mechanism: How did it occur? 6. Cause: Why did it occur? 7. Severity: How much was the user affected? 8. Cost: How much did it cost?
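One hypothetical way to enforce uniform recording is a record type with one field per attribute; nothing here is prescribed by the material beyond the eight questions themselves, and all field values are invented:

```python
from dataclasses import dataclass

@dataclass
class ProblemRecord:
    """One field per question; incident, fault and change reports
    all answer the same eight questions, each in its own way."""
    location: str    # where did the problem occur?
    timing: str      # when did it occur?
    mode: str        # what was observed?
    effect: str      # which consequences resulted?
    mechanism: str   # how did it occur?
    cause: str       # why did it occur?
    severity: str    # how much was the user affected?
    cost: float      # how much did it cost?

# An invented incident, with diagnosis fields still open:
incident = ProblemRecord(
    location="SITE-042/server-1", timing="2024-03-01 14:02",
    mode="kernel panic message", effect="operating system crash",
    mechanism="not yet diagnosed", cause="not yet diagnosed",
    severity="critical", cost=0.0)
print(incident.severity)
```

Leaving mechanism and cause as "not yet diagnosed" mirrors the point made below that those entries are often completed only during diagnosis, after the report is first filed.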
151 Key Software Metrics Resource Estimation Models (cont.) Incident Count Metrics (cont.) Incidents • An incident report focuses on the external problems of the system: the installation, the chain of events leading up to the incident, the effect on the user or other systems, and the cost to the user as well as the developer. Thus, a typical incident report addresses each of the eight attributes in the following way. • Incident Report • Location: such as installation where incident observed - usually a code that uniquely identifies the installation and platform on which the incident was observed. • Timing: CPU time, clock time or some temporal measure. Timing has two equally important aspects: real time of occurrence (measured on an interval scale), and execution time up to occurrence of incident (measured on a ratio scale). • Mode: type of error message or indication of incident (see below) • Effect: description of incident, such as "operating system crash" or "services degraded"
152 Key Software Metrics Resource Estimation Models (cont.) Incident Count Metrics (cont.) Incidents (cont.) • Mechanism: chain of events, including keyboard commands and state data, leading to incident. This application-dependent classification details the causal sequence leading from the activation of the source to the symptoms eventually observed. Unraveling the chain of events is part of diagnosis, so often this category is not completed at the time the incident is observed. • Cause: reference to possible fault(s) leading to incident. Cause is part of the diagnosis (and as such is more important for the fault form associated with the incident). Cause involves two aspects: the type of trigger and the type of source (that is, the fault that caused the problem). The trigger can be one of several things, such as physical hardware failure, operating conditions, malicious action, user error, or an erroneous report, while the actual source can be a fault such as a physical hardware fault, an unintentional design fault, an intentional design fault, or a usability problem. • Severity: how serious the incident's effect was for the service required from the system. Reference to a well-defined scale, such as "critical," "major," "minor". Severity may also be measured in terms of cost to the user. • Cost: Cost to fix plus cost of lost potential business. This information may be part of diagnosis and therefore supplied after the incident occurs.
153 Key Software Metrics Resource Estimation Models (cont.) Incident Count Metrics (cont.) Faults • An incident reflects the user's view of the system, but a fault is seen only by the developer. Thus, a fault report is organized much like an incident report but has very different answers to the same questions. It focuses on the internals of the system, looking at the particular module where the fault occurred and the cost to locate and fix it. A typical fault report interprets the eight attributes in the following way: Fault Report • Location: within-system identifier, such as module or document name. The IEEE Standard Classification for Software Anomalies [IEEE 1992] provides a high-level classification that can be used to report on location. • Timing: phases of development during which fault was created, detected and corrected. Clearly, this part of the fault report will need revision as a causal analysis is performed. It is also useful to record the time taken to detect and correct the fault, so that product maintainability can be assessed. • Mode: type of error message reported, or activity which revealed fault (such as review). The Mode classifies what is observed during diagnosis or inspection. The IEEE standard on software anomalies [IEEE 1992] provides a useful and extensive classification that we can use for reporting the mode.
154 Key Software Metrics Resource Estimation Models (cont.) Incident Count Metrics (cont.) Faults (cont.) • Effect: failure caused by the fault. If separate failure or incident reports are maintained, then this entry should contain a cross-reference to the appropriate failure or incident reports. • Mechanism: how source was created, detected, corrected. Creation explains the type of activity that was being carried out when the fault was created (for example, specification, coding, design, maintenance). Detection classifies the means by which the fault was found (for example, inspection, unit testing, system testing, integration testing), and correction refers to the steps taken to remove the fault or prevent the fault from causing failures. • Cause: type of human error that led to fault. Although difficult to determine in practice, the cause may be described using a classification suggested by Collofello and Balcom: a) communication: imperfect transfer of information; b) conceptual: misunderstanding; or c) clerical: typographical or editing errors. • Severity: refer to severity of resulting or potential failure. That is, severity examines whether the fault can actually be evidenced as a failure, and the degree to which that failure would affect the user. • Cost: time or effort to locate and correct; can include analysis of cost had fault been identified during an earlier activity.
155 Key Software Metrics Resource Estimation Models (cont.) Incident Count Metrics (cont.) Changes • Once a failure is experienced and its cause determined, the problem is fixed through one or more changes. These changes may include modifications to any or all of the development products, including the specification, design, code, test plans, test data and documentation. Change reports are used to record the changes and track the products most affected by them. For this reason, change reports are very useful for evaluating the most fault-prone modules, as well as other development products with unusual numbers of defects. A typical change report may look like this: Change Report • Location: identifier of document or module affected by a given change. • Timing: when change was made • Mode: type of change
156 Key Software Metrics Resource Estimation Models (cont.) Incident Count Metrics (cont.) Changes (cont.) – Effect: success of change, as evidenced by regression or other testing – Mechanism: how and by whom change was performed – Cause: corrective, adaptive, preventive or perfective – Severity: impact on rest of system, sometimes as indicated by an ordinal scale – Cost: time and effort for change implementation and test
157 Software Quality Assurance Framework of Quality Assurance Attributes for Process Improvement
158 Software Quality Assurance • Business is caught up in a massive restructuring not seen since the second industrial revolution, which introduced the factory system and dramatically changed all aspects of our lives. The result is a paradigm shift that requires a new context for leadership and management practice in all sectors of our economy. • We cannot stop it. We cannot avoid it. We can, however, tap into the power generated by the change forces at work here and use it to transform our company in ways that will take full advantage of the capabilities inherent in an information age economy. • The term information age is no more restricted to computers and data than the term industrial age is limited to machines and materials. The information age concept is all-encompassing and affects all facets of our culture, society, and economy.
159 Software Quality Assurance • Process Improvement science/engineering includes the methodologies and best practices proven to encourage and help organizations to change not only the way they produce products but also the way they manage the people who are producing those products. Elements of Process Improvement Methodology • A Framework for Managing Process Improvement (Framework) is the response to the need for an overarching methodology. The Framework consists of a comprehensive methodology for performing process improvement projects and is applicable in all areas of the software industry. – Continuous Process Improvement (CPI), which reduces variation in the quality of output products and services and incrementally improves the flow of work within a functional activity. – Business Process Redesign (BPR), which removes non-value added activities from processes, improves our cycle-time response capability, and lowers process costs. – Business Process Reengineering (BPR), which radically transforms processes through the application of enabling technology to gain dramatic improvements in process efficiency, effectiveness, productivity, and quality.
160 Continuous Process Improvement (CPI) • Continuous process improvement is most closely associated with the Total Quality Management (TQM) discipline. The traditional approach is to empower self-managed teams to make task-level improvements in quality, cycle time, and cost. Improvements are incremental and sustained. They are creative responses to the constant need to get the job done in changing circumstances. CPI actions typically are wholly contained within one functional activity, although cross-functional teams can be organized to deal with chronic or pervasive situations. • To use an analogy, the objective of a CPI team is to tend to one or two trees in the forest.
161 Business process redesign (BPR) • Business process redesign is the next level of improvement. BPR actions are undertaken in a project context with planned or specific improvement objectives. The focus is on streamlining processes by detecting and eliminating non-value added process time and costs, and incorporating best practices in whole or in part. The objective of BPR is to maximize improvement in the quality of output products and services. • Cross-functional teams work together to model and analyze processes (AS-IS condition), and design a TO-BE condition that maximizes quality and production capabilities. There is a strong reliance on capturing the experience base of project participants as the basis for generating improvement ideas. While these techniques often produce significant improvement ideas, the ideas are limited because of the insider perspective of members of the improvement team. • To continue the analogy, the forest is managed in spite of all the trees.
162 Business process reengineering (BPR) • Process reengineering is often undertaken in response to dramatic changes in the external environment (a paradigm shift, for instance) that apply considerable pressure on the ability of the organization to fulfill its mission, improve its competitive positioning, or even survive as an entity. Virtually all functions within the organization are affected by BPR actions. The existing organizational and technological infrastructures are subject to major dislocations, and pressure is applied to the very culture of the organization. • BPR actions are initiated and guided by senior leadership. • New performance measures must be developed to reflect these changes. Assuming that the objective of BPR is to produce quantum leaps in quality, cycle time, and cost efficiency, baseline measurements and data will be of little use in the search for new, more meaningful measures (and may even inhibit it). • To complete the analogy, the objective of the BPR team is to create a new forest with sturdier and more valuable trees.
163 Three organizational elements involved in process improvement • At all three levels the elements are process, people, and technology. – At the CPI level, people and how they perform their jobs are the focus. – At the redesign (BPR) level, process is the focus of improvement efforts. – At the reengineering (BPR) level, technology assumes primary importance. But all three components are always part of every quality improvement effort. The differences are only of degree, importance, and priority. • We can also make the case that each level of improvement emphasizes different categories of measures. – CPI deals, to a great extent, with quality measures; – BPR with cost measures.
164 Software Quality Assurance Process Management Principles • The concept of process management is not new. It has been practiced on the factory floor for years. Process management became possible in manufacturing as machines began replacing manual labor, and information and communication networks became available to control the machines. Work flow was simplified, product quality was increased, cycle time was reduced, and process costs had nowhere to go but down. Today, there are entire factories with processes so well designed that no humans are needed except for emergencies, maintenance, and enhancements. And it is possible to buy a small high-quality color television for the price of a meal for two at a better restaurant. • It is interesting to note that while the typical service corporation today is patterned after the old-style factory with its manually-intensive, functionally-based division of labor concept, the new-age service corporation is being modeled after the modern industrial corporation and its automated end-to-end processes.
165 Software Quality Assurance Functions and Processes • A process is simply the largest unit referring to the flow of work through an enterprise, beginning with external suppliers and ending with external customers. Along the way, value is added at each step through a series of transformations involving the consumption of resources within an established control (rule-based) framework. Processes may be decomposed into smaller units, beginning with subprocesses and continuing to activities, tasks, and operations. The principles are the same, regardless of the level of work transformation observed.
166 Software Quality Assurance Process Stakeholders • Customers - Customers are the recipients and users of the products and services produced by the process. • Suppliers - Suppliers provide the inputs to a process, which consist of materials and data. • Senior Management - Senior Management is defined as those entities outside the process that set rules, requirements, standards, constraints, and budgets on process performance. • Resource Providers - Resource providers fund process operations with facilities, equipment, machines, and labor. Everything consumed in whole or in part is considered a process resource.
168 Software Quality Assurance Barriers to Process Improvement • Process improvement actions and programs face obstacles that must be identified, understood, and overcome. In general, the barriers fall into one of three categories: – Organizational – Cultural – Regulatory
169 Software Quality Assurance Organizational Barriers • These barriers are related to the hierarchical structure of the enterprise, in which employees focus more on serving management than on providing quality and service to the customer. Also included are the barriers inherent in functional rather than process management. • Senior leadership commitment and buy-in: Business process reengineering is a top-down initiative and depends upon strong leadership to meet and overcome obstacles to success. • Mismatch between authority and responsibilities: Process improvement includes the concepts of team-based performance and worker empowerment, both of which challenge traditional management and organizational wisdom.
170 Software Quality Assurance Organizational Barriers (cont.) • Functional and technical stovepipes: The concept of process crosses established organizational boundaries and requires new methods of managing work products. • Funding: Current funding practices support the status quo and inhibit management's ability to apply scarce resources to activities that produce products and services most needed by internal and external customers.
171 Software Quality Assurance Cultural Barriers • These barriers are related to work practices developed over time that militate against decentralized decision making and worker empowerment - both requisites for high performance in an information age economy. – Becoming customer-centered: Managers and employees must shift from a rule-based to a customer-based mode of operation and establish performance measures that focus on process outcomes rather than process inputs such as budget. – Aversion to job elimination, risk, and change: The very essence of process improvement is the elimination of non-value added activities, radical change, and adoption of high-technology solutions - all of which challenge the organizational status quo.
172 Software Quality Assurance Regulatory Barriers • These barriers prevent the redesign of workflow commensurate with process management, effective utilization of work teams, and innovative changes in the recognition and reward systems that promote effective teamwork. – Focus on current operations: Process improvement disrupts established procedure and challenges existing regulations and directives. Management must successfully guide employees through the transition period from existing process to reengineered process.
173 Software Quality Assurance Regulatory Barriers (cont.) – Inconsistent rules, methods, and techniques underlying management processes: Many of the non-value added activities associated with processes undergoing improvement can be traced back to obsolete or inappropriate rules, regulations, methods, and techniques whose worth must be reevaluated. – Policies on job descriptions, training, and reassignment: Process improvement calls for a radical reengineering of personnel policies that currently limit the authority of management to develop a flexible, customer-centered work force with appropriate rewards and recognition for team-centered performance and individual skill development.
174 Software Quality Assurance Approach to Quality Process Improvement • The methodology for conducting process improvement projects is described in detail beginning in Section 4 of this guidebook. This is the general approach for conducting a process improvement effort: 1. Understand the business and functional requirements for the process under study with respect to mission. 2. Assess the current status of all process elements (process, people and technology) with respect to meeting requirements and enabling change. 3. Establish the baseline (AS-IS models) with respect to process, data, organization, and technology. 4. Identify and quantify stakeholder interests in the process, establish new standards of performance, and design measures and key indicators. 5. Conduct an improvement analysis program to identify potential improvement initiatives that will raise process performance to the desired level.
175 Software Quality Assurance Approach to Quality Process Improvement (cont.) 6. Design a change management program that will address organizational and technical issues to align improvements in these elements with potential or planned process improvement. 7. Develop a process vision and construct TO-BE models of process, data, organization, and technology based on the vision and improvement initiatives. Identify alternative means of achieving the desired future state. 8. Perform economic and risk analysis on all alternatives, and select and document a recommended course of action. 9. Perform enterprise engineering to construct an organizational and technological platform suitable for the improved, redesigned, or reengineered process. 10. Test (prototype or pilot), implement, deploy, operate, and maintain the improved process and supporting systems. 11. Evaluate progress, update baseline models, and prepare for the next cycle of improvements.
176 Software Quality Assurance Business Engineering Management Process
177 Software Quality Assurance Quality Improvement Attributes • Quality process improvement (especially process reengineering) is a complex undertaking that demands leadership from the highest levels of the company and participation from virtually all executives, managers, and professional employees. 1. Total Quality Management (TQM) principles and practices ensure high-quality products and services to both internal and external customers of business processes. 2. Industrial engineering provides process measures, controls on process efficiency and effectiveness, and standardized procedures. 3. Workflow design incorporates the concurrent management of technological and human change. 4. Process redesign incorporates innovations and eliminates non-value added time and costs from processes.
178 Software Quality Assurance Quality Improvement Attributes (cont.) 5. The introduction of competitive (aggressive) information technology enables superlative customer quality and service. 6. The concept of an end-to-end process improvement model with a supporting methodology seems to be the most expeditious way to implement Davenport's principles. Such a methodology would: – Begin with a statement of mission, vision, objectives, goals, and strategies – Produce a reengineered business process to support the stated organizational mission and related strategic and business plans
179 Software Quality Assurance Quality Improvement Attributes (cont.) – Continue through information systems design, deployment, and operations consistent with the Enterprise Model – Employ process management principles to ensure that process improvement gains are held – Focus on cultural and organizational change management issues and structural barriers to change that represent the most risk-prone component of process improvement efforts 7. The Framework for Managing Process Improvement (Framework or F/MPI) is designed to enable process improvement efforts within the Department of Defense consistent with the established body of expertise for process improvement and best practices analysis.
180 Software Quality Assurance Change Management • Quality improvement inevitably requires change to an organization's structure and culture. – All change is disruptive. Therefore, business process improvement is disruptive to an organization's structure and culture. – Enterprises that have attempted process improvement while ignoring this syllogism have invariably failed. – This means that organizational change management is the most critical responsibility in an overall program of process improvement. • Organizational change management begins during the planning phase and extends through the project execution phase. Thus, dealing with organizational change is a continuous responsibility. This responsibility may be vested in one member of the improvement team, or it may be the responsibility of a separate team chartered to support all process improvement teams. The former works when only one process improvement effort is under way across a group of functional activities.
181 Software Quality Assurance Change Management (cont.) • Organizational change management begins during the planning phase and extends through the project execution phase. – Organizational change is a continuous responsibility. – Responsibility is vested in the SQA team. • SQA can serve as the focus or hub for multiple process improvement efforts. This arrangement allows: – The change management team to take a broader view of the relationship of process to organization – Assurance that organizational change management efforts are coordinated for all improvement efforts • Technology change management, discussed in Section 7 of this guide, is also best dealt with by an independent change management team working with separate process improvement teams.
182 Software Quality Assurance Change Management (cont.) – Technology change management is also best dealt with by an independent change management team working with separate process improvement teams. • The following figure also shows that change management must consider the external environment, external stakeholders, and the technical infrastructure when attempting to shape the organization's structure and culture around improved processes.
183 Software Quality Assurance Change Management Schematic (figure: Software Quality Assurance Change Management Team)
184 Software Quality Assurance The change management team must accomplish four general objectives: 1. Understand the organizational changes that are needed as a consequence of process redesign or reengineering 2. Design the necessary structural changes needed to support the new process 3. Design a program that will begin the cultural transformation of the organization to one that is aligned with the principles behind process improvement 4. Anticipate, recognize, and resolve the barriers to change that will spring up in reaction to the change management plan.
185 Software Quality Assurance • Solid change demands careful planning with inputs from all levels of the organization. Carefully timed action steps support change implementation. This requires answers to the who, what, when, where, and why. – Is the organization's leadership prepared to lead the change effort? – Have we primed the organization for planned changes? – Is the organizational structure ready to support change? – Do team members have the right mix of skills to make the changes happen? – Is success a reasonable expectation? – Are functional managers properly committed to the planned change? – Are sufficient resources available to support the change effort?
186 Software Quality Assurance Barriers to change management success are well known. • They generally include the following, and all must be successfully contained or resolved, or the success of the process improvement effort cannot be reasonably assured: – Lack of active, visible, and committed leadership – Ignorance of improvement goals and employees' roles in achieving them – Ingrained satisfaction with the status quo – Vested interests in keeping things as they are – Inadequate training and preparation for the change effort
187 Software Quality Assurance Barriers to change management success are well known (cont.) – Hostility directed at members of the improvement or change teams – Little or no reward or recognition for participating in the change effort – Threats to personal security and career objectives – Lack of organizational support – Inadequate, infrequent, or deceptive communications about the changes
188 Software Quality Assurance Course Summary • Software Quality Assurance is an engineering discipline responsible for ensuring that all software disciplines perform their tasks so that the product delivered to the customer meets the customer's idea of quality. • Although SQA is led by the SQA team, the responsibility for building quality attributes into the product rests with all members of the product team.