THE SOFTWARE QUALITY
DILEMMA
1. “Good Enough” Software: Software
companies create software with known
bugs and deliver it to a broad
population of end users.
2. The Cost of Quality: The cost of
quality includes all costs incurred in the
pursuit of quality or in performing
quality-related activities and the
downstream costs of lack of quality.
3. Risks: Low-quality software increases
risks for both the developer and the end
user.
4. Negligence and Liability: In most
cases, the customer claims that the
developer has been negligent. The
developer claims that the customer has
repeatedly changed its requirements.
5. Quality and Security: Software that
does not exhibit high quality is easier to
hack; as a consequence, low-quality
software can indirectly increase the
security risk, with all of its attendant costs.
6. The Impact of Management Actions:
Software quality is often influenced as
much by management decisions as it is
by technology decisions.
• Estimation decisions
• Scheduling decisions
• Risk-oriented decisions
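As a hedged illustration of the cost-of-quality point above, the usual breakdown sums prevention and appraisal costs (incurred in the pursuit of quality) with internal and external failure costs (the downstream costs of a lack of quality). All figures below are hypothetical:

```python
# Hypothetical cost-of-quality breakdown (illustrative figures only).
costs = {
    "prevention": 40_000,        # training, planning, tooling
    "appraisal": 25_000,         # reviews, testing, audits
    "internal_failure": 30_000,  # rework on defects caught before release
    "external_failure": 90_000,  # defects reported by end users
}

cost_of_quality = sum(costs.values())
cost_of_poor_quality = costs["internal_failure"] + costs["external_failure"]

print(cost_of_quality)       # total cost of quality
print(cost_of_poor_quality)  # downstream cost of the lack of quality
```

The point the model makes: external failure usually dominates, so money spent on prevention and appraisal displaces far larger downstream costs.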
ACHIEVING SOFTWARE
QUALITY
Management and practice are applied
within the context of four broad
activities that help a software team
achieve high software quality.
• Software Engineering Methods
• Project Management Techniques
• Quality Control
• Quality Assurance
A defect amplification model can be used
to illustrate the generation and
detection of errors during the design
and code generation actions of a
software process.
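A minimal numeric sketch of such a model follows; the amplification factors, new-defect counts, and detection efficiencies are hypothetical, chosen only to show how undetected defects compound across steps:

```python
def step(defects_in, amplification_factor, new_defects, detection_efficiency):
    """One development step: incoming defects are amplified, new defects are
    generated, and a review catches a fraction of the total."""
    total = defects_in * amplification_factor + new_defects
    return total * (1.0 - detection_efficiency)

# Latent defects after each step when reviews are conducted
# (preliminary design -> detail design -> coding; numbers are illustrative).
defects = step(10, 1.0, 6, 0.5)        # preliminary design review catches 50%
defects = step(defects, 1.5, 12, 0.6)  # detail design review catches 60%
defects = step(defects, 1.2, 20, 0.7)  # code review catches 70%
print(round(defects, 2))               # latent defects passed on to testing
```

Rerunning the same pipeline with lower detection efficiencies (i.e., no reviews) shows how many more defects reach testing, which is the argument the model is used to make.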
REVIEWS
Technical reviews should be applied with a level of
formality that is appropriate for the product to be
built, the project time line, and the people who are
doing the work.
INFORMAL REVIEWS
Informal reviews include a simple desk check of a
software engineering work product with a
colleague, a casual meeting (involving more than
two people) for the purpose of reviewing a work
product, or the review-oriented aspects of pair
programming.
FORMAL TECHNICAL
REVIEWS
A formal technical review (FTR) is a software
quality control activity performed by software
engineers.
The objectives of an FTR are:
(1) to uncover errors in function, logic, or
implementation for any representation of the
software;
(2) to verify that the software under review meets
its requirements;
(3) to ensure that the software has been
represented according to predefined
standards;
(4) to achieve software that is developed in a
uniform manner.
ELEMENTS OF SOFTWARE QUALITY
ASSURANCE
• Standards: The job of SQA is to ensure that
standards that have been adopted are
followed and that all work products conform
to them.
• Reviews and audits: Technical reviews are a
quality control activity performed by software
engineers for software engineers. Audits are a
type of review performed by SQA personnel
with the intent of ensuring that quality
guidelines are being followed for software
engineering work.
• Testing: Software testing is a quality control
function that has one primary goal—to find
errors.
• Error/defect collection and analysis: SQA
collects and analyzes error and defect data to
better understand how errors are introduced
and which software engineering activities are
best suited to eliminating them.
• Change management: SQA ensures that
adequate change management practices have
been instituted.
• Education: A key contributor to
improvement is education of software
engineers, their managers, and other
stakeholders.
• Vendor management: The job of the SQA
organization is to ensure that high-quality
software results, by suggesting specific quality
practices that the vendor should follow and
incorporating quality mandates as part of any
contract with an external vendor.
• Security management: SQA ensures that
appropriate process and technology are used to
achieve software security.
• Safety: SQA may be responsible for assessing the
impact of software failure and for initiating the
steps required to reduce risk.
• Risk management: The SQA organization ensures
that risk management activities are properly
conducted and that risk-related contingency
plans have been established.
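The error/defect collection and analysis element above is often supported by simple cause tallies (a Pareto-style analysis): count defects by underlying cause and attack the "vital few" causes first. A sketch with hypothetical defect records:

```python
from collections import Counter

# Hypothetical defect records: (defect_id, underlying cause).
defect_log = [
    ("D1", "incomplete spec"), ("D2", "logic error"),
    ("D3", "incomplete spec"), ("D4", "interface error"),
    ("D5", "incomplete spec"), ("D6", "logic error"),
]

tally = Counter(cause for _, cause in defect_log)

# The causes that account for most defects indicate which software
# engineering activities are best suited to eliminating them.
for cause, count in tally.most_common(2):
    print(cause, count)
```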
SQA TASKS
The SQA group:
• Prepares an SQA plan for a project
• Participates in the development of the project’s
software process description
• Reviews software engineering activities to verify
compliance with the defined software process
• Audits designated software work products to
verify compliance with those defined as part of
the software process
• Ensures that deviations in software work and
work products are documented and handled
according to a documented procedure
• Records any noncompliance and reports it to
senior management
SQA GOALS
• Requirements quality
• Design quality
• Code quality
• Quality control effectiveness
SOFTWARE RELIABILITY
Software reliability is defined in statistical
terms as “the probability of failure-free
operation of a computer program in a
specified environment for a specified
time”.
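In practice, reliability is often reported through the standard MTBF and availability relations; these are common reliability measures rather than formulas taken from the text above, and the figures below are illustrative:

```python
def mtbf(mttf, mttr):
    """Mean time between failures = mean time to failure + mean time to repair."""
    return mttf + mttr

def availability(mttf, mttr):
    """Percentage of elapsed time in which the program is usable."""
    return 100.0 * mttf / (mttf + mttr)

# Example: the software runs 68 hours between failures on average
# and takes 2 hours to restore (hypothetical figures).
print(mtbf(68, 2))                    # 70 hours
print(round(availability(68, 2), 1))  # 97.1 percent
```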
SOFTWARE TESTING STRATEGY
“A strategy for software testing provides a road
map that describes the steps to be conducted
as part of testing, when these steps are
planned and then undertaken, and how much
effort, time, and resources will be required.”
• Verification refers to the set of tasks that
ensure that software correctly implements a
specific function.
• Validation refers to a different set of tasks
that ensure that the software that has been
built is traceable to customer requirements.
• The software developer is always responsible
for testing the individual units (components)
of the program, ensuring that each performs
the function or exhibits the behavior for
which it was designed.
• In many cases, the developer also conducts
integration testing—a testing step that leads
to the construction (and test) of the complete
software architecture.
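A minimal sketch of this developer-level (verification) testing, using Python’s built-in unittest; the unit under test, apply_discount, is a hypothetical component, not something from the text:

```python
import unittest

def apply_discount(price, percent):
    """Unit under test (hypothetical): price reduced by a percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

class DiscountUnitTest(unittest.TestCase):
    # Verification asks: does the unit correctly implement its specified function?
    def test_typical_discount(self):
        self.assertAlmostEqual(apply_discount(200.0, 25), 150.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 120)

# Run the unit tests programmatically and report overall success.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountUnitTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())
```

Each unit gets such a suite before integration testing combines the units and exercises their interfaces.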
VALIDATION TESTING
Validation testing begins at the
culmination of integration testing,
when individual components have been
exercised, the software is completely
assembled as a package, and interfacing
errors have been uncovered and
corrected.
1. Validation-Test Criteria
• A test plan outlines the classes of tests to be
conducted, and a test procedure defines specific test
cases designed to ensure that all functional
requirements are satisfied, all behavioral
characteristics are achieved, all content is accurate
and properly presented, all performance
requirements are attained, documentation is correct,
and usability and other requirements are met.
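One way to make the “all functional requirements are satisfied” criterion checkable is a simple traceability check between requirements and test cases; the identifiers below are hypothetical:

```python
# Hypothetical requirement IDs and the requirements each test case exercises.
requirements = {"FR-1", "FR-2", "FR-3"}
test_cases = {
    "TC-01": {"FR-1"},
    "TC-02": {"FR-2", "FR-3"},
}

covered = set().union(*test_cases.values())
uncovered = requirements - covered
print(sorted(uncovered))  # an empty list means every requirement is exercised
```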
2. Configuration Review
 The intent of the review is to ensure
that all elements of the software
configuration have been properly
developed, are cataloged, and have the
necessary detail.
3. Alpha and Beta Testing
• The alpha test is conducted at the developer’s site by a
representative group of end users, in a controlled
environment. The developer records errors and usage
problems.
• The beta test is conducted at one or more end-user sites.
It is a “live” application of the software in an
environment that cannot be controlled by the developer.
The customer records all problems (real or imagined)
encountered during beta testing and reports them to
the developer at regular intervals.
