MODULE 1: SOFTWARE QUALITY
ASSURANCE
THE SOFTWARE DEVELOPMENT LIFE
CYCLE (SDLC)
 It is a structured process that enables the production
of high-quality, low-cost software in the shortest
possible production time.
 The goal of the SDLC is to produce superior software
that meets or exceeds customer expectations.
SOFTWARE DEVELOPMENT LIFE CYCLE
DEFECT/BUG LIFE CYCLE IN SOFTWARE TESTING
 It is the specific set of states that a defect or bug goes
through during its entire life.
 The purpose is to easily coordinate and communicate the
current status of a defect as it changes between
assignees, and to make the defect-fixing process
systematic and efficient.
 Defect Status or Bug Status is the present state that the
defect or bug is currently in.
 The goal is to precisely convey the current state or
progress of a defect or bug in order to better track and
understand the actual progress of the defect life cycle.
DEFECT STATES WORKFLOW
 The number of states that a defect goes through varies from project
to project.
 New: When a defect is logged and posted for the first time, it is
assigned the status NEW.
 Assigned: Once the bug is posted by the tester, the tester's lead
approves the bug and assigns it to the developer team.
 Open: The developer starts analyzing the defect and works on the fix.
 Fixed: When a developer makes the necessary code change and
verifies the change, he or she can mark the bug status as “Fixed.”
 Pending retest: Once the defect is fixed, the developer hands the
code back to the tester for retesting. Since the testing is still
pending on the tester's end, the status assigned is “Pending Retest.”
 Retest: The tester retests the code at this stage to check whether
the defect has been fixed by the developer, and changes the
status to “Retest.”
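The workflow above can be sketched as a small state machine. The transition table below is a hypothetical simplification: the state names follow the list above, the retest-fail path back to Open and the terminal Closed state are added for illustration, and real projects typically add further states such as Deferred or Rejected.

```python
from enum import Enum

class DefectStatus(Enum):
    NEW = "new"
    ASSIGNED = "assigned"
    OPEN = "open"
    FIXED = "fixed"
    PENDING_RETEST = "pending_retest"
    RETEST = "retest"
    CLOSED = "closed"          # illustrative terminal state, not in the list above

# Allowed transitions (simplified): retest either closes the defect or reopens it.
TRANSITIONS = {
    DefectStatus.NEW: {DefectStatus.ASSIGNED},
    DefectStatus.ASSIGNED: {DefectStatus.OPEN},
    DefectStatus.OPEN: {DefectStatus.FIXED},
    DefectStatus.FIXED: {DefectStatus.PENDING_RETEST},
    DefectStatus.PENDING_RETEST: {DefectStatus.RETEST},
    DefectStatus.RETEST: {DefectStatus.CLOSED, DefectStatus.OPEN},
    DefectStatus.CLOSED: set(),
}

def advance(current: DefectStatus, target: DefectStatus) -> DefectStatus:
    """Move a defect to a new status, rejecting transitions the workflow forbids."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition: {current.name} -> {target.name}")
    return target
```

Encoding the workflow as data like this is what lets a defect-tracking tool enforce the life cycle instead of merely documenting it.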
WHAT IS QUALITY?
Quality – the developed product meets its specification
Problems:
• The development organization may have requirements that exceed the customer's
specifications (added cost of product development)
• Certain quality characteristics cannot be specified in unambiguous terms (e.g.
maintainability)
• Even if the product conforms to its specifications, users may not consider it
to be a quality product (because users may not be involved in the development
of the requirements)
WHAT IS QUALITY MANAGEMENT?
Quality Management – ensuring that required level of product quality is achieved
• Defining procedures and standards
• Applying procedures and standards to the product and process
• Checking that procedures are followed
• Collecting and analyzing various quality data
Problems:
• Intangible aspects of software quality can't be standardized (e.g. elegance
and readability)
WHY IS QUALITY IMPORTANT?
Why a business should be concerned with quality:
 Quality is now a competitive issue
 Quality is a must for survival
 Quality gives you global reach
 Quality is cost effective
 Quality helps retain customers and increase profits
 Quality is the hallmark of a world-class business
WHAT ARE SQA, SQP, SQC, AND SQM?
SQA includes all 4 elements…
1. Software Quality Assurance – establishment of a network of
organizational procedures and standards leading to high-quality
software
2. Software Quality Planning – selection of appropriate
procedures and standards from this framework and their
adaptation to a specific software project
3. Software Quality Control – definition and enactment of
processes that ensure that project quality procedures and
standards are being followed by the software development
team
4. Software Quality Metrics – collection and analysis of quality
data to predict and control the quality of the software product
being developed
GOALS OF SQA
1. To improve software quality by monitoring both the process
and the product
2. To ensure compliance with all local standards for SE
3. To ensure that any product defect, process variance, or
standards non-compliance is noted and fixed
SOFTWARE QUALITY ASSURANCE (SQA)
 It is a way to assure quality in the software.
 It is the set of activities which ensure that processes, procedures,
and standards are suitable for the project and are implemented
correctly.
 It is a process which runs in parallel with software development.
 It focuses on improving the process of software development
so that problems can be prevented before they become major
issues.
 It is a kind of umbrella activity that is applied throughout the
software process.
 Software Quality Assurance has:
1. A quality management approach
2. Formal technical reviews
3. Multi testing strategy
4. Effective software engineering technology
5. Measurement and reporting mechanism
MAJOR SOFTWARE QUALITY ASSURANCE ACTIVITIES:
1. SQA Management Plan:
Make a plan for how you will carry out SQA throughout the project.
 Think about which set of software engineering activities is best for the project, and
check the skill level of the SQA team.
2. Set the Checkpoints:
The SQA team should set checkpoints.
 Evaluate the performance of the project on the basis of data collected at the different
checkpoints.
3. Multi-testing Strategy:
Do not depend on a single testing approach.
 When many testing approaches are available, use them.
4. Measure Change Impact:
A change made to correct an error sometimes reintroduces more errors, so
keep a measure of the impact of each change on the project.
 Retest the changed code to check the compatibility of the fix with the whole project.
5. Manage Good Relations:
In the working environment, maintaining good relations with the other teams involved in
project development is mandatory.
A bad relationship between the SQA team and the programming team will directly and
badly impact the project. Don't play politics.
QUALITY IMPROVEMENT – THE WHEEL OF 6 SIGMA
Six Sigma
QUALITY IMPROVEMENT – SIX SIGMA PROCESS
• Visualize – Understand how it works now and imagine how it will work in the future
• Commit – Obtain commitment to change from the stakeholders
• Prioritize – Define priorities for incremental improvements
• Characterize – Define existing process and define the time progression for
incremental improvements
• Improve – Design and implement identified improvements
• Achieve – Realize the results of the change
BENEFITS OF SOFTWARE QUALITY ASSURANCE
1. SQA produces high-quality software.
2. A high-quality application saves time and cost.
3. SQA is beneficial for better reliability.
4. SQA helps the software remain usable with no maintenance
for a long time.
5. High-quality commercial software increases the market
share of the company.
6. It improves the process of creating software.
7. It improves the quality of the software.
MEASURES OF SOFTWARE QUALITY ASSURANCE
1. Reliability –
Includes aspects such as availability, accuracy, and recoverability of the system to
continue functioning under specified use over a given period of time. For example,
recoverability of the system from a shut-down failure is a reliability measure.
2. Performance –
Measures the throughput of the system using system response time, recovery time,
and start-up time. Performance testing is done to measure the behavior of the system
under a heavy workload in terms of responsiveness and stability.
3. Functionality –
Represents whether the system satisfies the main functional requirements. It simply
refers to the required and specified capabilities of a system.
4. Supportability –
There are a number of other requirements or attributes that a software system must
satisfy. These include testability, adaptability, maintainability, scalability, and so on.
These requirements generally enhance the capability to support the software.
5. Usability –
The capability or degree to which a software system is easy to understand and use
by its specified users or customers to achieve specified goals with effectiveness,
efficiency, and satisfaction. It includes aesthetics, consistency, documentation, and
responsiveness.
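Two of these measures can be quantified directly. The sketch below uses the classic availability formula (MTBF over MTBF plus MTTR) and a simple requests-per-second throughput figure; the formulas are standard, but they are illustrative additions rather than part of the slides.

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Uptime fraction from mean time between failures (MTBF)
    and mean time to repair (MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def throughput(requests_handled: int, elapsed_seconds: float) -> float:
    """Simple performance measure: requests processed per second."""
    return requests_handled / elapsed_seconds

# 500 h between failures, 2 h to recover -> about 99.6% available
print(round(availability(500, 2), 3))   # 0.996
print(throughput(12_000, 60))           # 200.0
```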
SOFTWARE QUALITY PLAN
 Tailoring - the SQP should select those organizational
standards that are appropriate to a particular product
 Standardization - the SQP should use (call out) only approved
organizational process and product standards
 If new standards are required, a quality improvement should
be initiated
 Elements - SQP elements are usually based on the ISO-9001
model elements
 The SQP is not written for software developers. It is written for
SQEs as a guide for SQC, and for the customer to monitor
development activities
 Things like software production, software product plans, and
risk management should be defined in the SDP and IP
 Quality factors shouldn't be sacrificed to achieve efficiency.
Don't take the job if the quality process can't be upheld
SOFTWARE QUALITY CONTROL
 Quality control is defined as
“a set of activities designed to evaluate the quality
of a developed or manufactured product”
 quality control inspections and other activities take place
before the product is shipped to the client
 it prevents causes of errors, and detects and corrects them
early in the development process
 it reduces the rate of products that do not qualify for
shipment
 Quality control and quality assurance activities serve
different objectives.
METHODS OF SOFTWARE QUALITY CONTROL
SQC involves overseeing the software development process to ensure that the
procedures and standards (STDs) are being followed.
The following activities constitute SQC:
 Quality Reviews - in-process reviews of processes and products
 Reviews are the most widely used method of validating the quality of processes
and products.
 Reviews make quality everyone's responsibility.
 Quality must be built in. The SQE is responsible for writing Quality Engineering
Records (QERs) documenting their participation in these reviews.
 Tests - end-result verifications of products. These verifications are conducted
after the software has been developed.
 Test procedures are followed during the conduct of these activities. The SQE is
responsible for keeping the logs and sometimes for writing the test report.
 Quality Audits - in-process verifications of processes.
 These audits are conducted periodically (e.g. twice a month) to assess compliance
with the process STDs.
QUALITY REVIEWS
 Peer reviews - reviews of processes and products by groups of
people.
 These reviews require pre-review preparation by all participants.
 If a participant is not prepared, then the review is not effective.
 This type of review requires participation of the SQE, moderator,
recorder, author(s), and one or more critical reviewers.
 All issues found during these reviews are documented on AR forms.
 Walkthroughs - reviews of products by groups of people mostly
without preparation.
 For example a requirements traceability review is a walkthrough.
 It involves tracing a requirement from customer requirements to the
test procedures.
 All issues found during these reviews are documented on CAR forms.
 Desk inspections - reviews of products by individuals.
 These reviews involve people reviewing products by themselves (not
in a group) and then submitting their comments to the author(s).
 The issues found during these reviews are treated in an informal
manner.
TESTS
 Engineering Dry-run - a test conducted by engineering without the SQE.
 These tests include Unit Tests and engineering dry-runs of the formal tests.
 These engineering dry-runs are used to verify the correctness and completeness of the
test procedures.
 Also, this is the final engineering verification of the end product before sell-off to
the SQE.
 All issues found during these tests are documented on STR forms.
 SQE Dry-run - a test conducted by the SQE.
 These tests include PQT, FAT, and SAT dry-runs.
 These tests are used to verify the end product before the formal test with the
customer.
 The SQE is sometimes responsible for writing the test report.
 However, if a separate test group is available, the SQE is relieved of this obligation.
 All issues found during these tests are documented on STR forms.
 TFR - test conducted as “RFR - run-for-record” with the SQE and the
customer.
 These tests include FAT and SAT.
 These tests are conducted to sell the end-product off to the customer.
 SQE is present at all such tests.
 All issues found during these tests are documented on STR forms.
QUALITY AUDITS
 SQE Audits - audits conducted by the SQE to verify that the process STDs are
being followed.
 Examples of these audits are IPDS compliance, Configuration Control, and
Software Engineering Management.
 All findings for these audits are documented on QER forms.
 The results of the audits are distributed to the next level of management
(above project level).
 If the issue(s) are not fixed then the findings are elevated to upper
management.
 Independent Audits - audits conducted by ISO generalists or other
independent entities to verify that the process STDs are being followed.
 These audits are usually conducted on a division/facility level.
 The results of these audits are distributed to upper management.
SOFTWARE CONFIGURATION MANAGEMENT
SCM – activities assuring that software products are properly
identified and their transition is tracked.
 In many mature organizations SCM is not part of SQA
responsibilities.
• Baseline Identification – identification of initial state of the
product
• Change Identification – identification of changes made to
the baseline
• Change Control – documentation of changes via revision
history, change summary, or using automated development tools
(ClearCase or Apex)
• Status Accounting – reporting changes to others and
monitoring completeness of the project archives
• Preservation – keeper of the software products
SOFTWARE QUALITY CHALLENGES
1. Defining it
2. Describing it (qualitatively)
3. Measuring it (quantitatively)
4. Achieving it (technically)
SOFTWARE QUALITY METRICS
METRICS COLLECTION
Software measurement - the process of deriving a numeric value for some
attribute of a software product or a software process.
 Comparison of these values to each other and to standards allows drawing
conclusions about the quality of software products or of the process.
• The focus of metrics-collection programs is usually on collecting metrics on
program defects and on the V&V process.
• Metrics can be either Control Metrics or Predictor Metrics.
• Most of the “ilities” cannot be measured directly unless there is historical data.
Instead, tangible software product attributes are measured, and the
“ility” factors are derived using predefined relationships between measurable
and synthetic attributes.
• The boundary conditions for all measurements should be established in advance
and then revised once a large databank of historical data has been established.
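Deriving an "ility" from tangible attributes can be sketched as follows. The weights and the attributes chosen (SLOC, average nesting depth, comment ratio) are invented for illustration; a real metrics program would calibrate the relationship against its own historical databank.

```python
import math

def maintainability_score(sloc: int, avg_nesting_depth: float,
                          comment_ratio: float) -> float:
    """Derive a synthetic maintainability score (0-100) from measurable
    attributes. Weights are hypothetical, for illustration only."""
    score = 100.0
    score -= 10.0 * math.log10(max(sloc, 1))   # larger components are harder to maintain
    score -= 5.0 * avg_nesting_depth           # deep nesting hurts readability
    score += 20.0 * comment_ratio              # documentation helps
    return max(0.0, min(100.0, score))
```

A small, shallow, well-commented module scores higher than a huge, deeply nested, uncommented one, which is all such a derived metric is meant to capture.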
THE PROCESS OF PRODUCT MEASUREMENT
1. Decide what data is to be collected
2. Assess critical (core) components first
3. Measuring component characteristics might require automated tools
4. Look for unusually high or low values (consistency only works in a factory)
5. Analysis of anomalous components should reveal whether the quality of the
product is compromised
PREDICTOR AND CONTROL METRICS
Examples of Predictor Analysis:
• Code Reuse: SLOC = ELOC = Ported Code
• Nesting Depth: ND > 5 = Low Readability
• Risk Analysis: # STR P1 > 0 at SAT = Low Product Reliability
Examples of Control Analysis:
• STR aging: Old STRs = Low Productivity
• Requirements Volatility: High Volatility = Scope Creep
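Two of the predictor rules above can be applied mechanically. The sketch below encodes the ND > 5 and P1-STR-at-SAT thresholds; the function name and flag strings are illustrative, not part of any standard.

```python
def predictor_flags(nesting_depth: int, p1_strs_open_at_sat: int) -> list[str]:
    """Apply two predictor rules: nesting depth above 5 flags low
    readability; any open priority-1 STR at SAT flags low reliability."""
    flags = []
    if nesting_depth > 5:
        flags.append("low readability")
    if p1_strs_open_at_sat > 0:
        flags.append("low product reliability")
    return flags
```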
SOFTWARE PRODUCT METRICS
There are two categories of software product metrics:
1. Dynamic metrics – these metrics are collected by measuring
elements during the program's execution.
 These metrics help to assess the efficiency and reliability of a
software product.
 The parameters collected can be easily measured (e.g.
execution time, mean time between failures)
2. Static metrics – these metrics are collected by measuring
parameters of the end products of software development.
• These metrics help to assess the complexity, understandability,
and maintainability of a software product.
• The SLOC size and ND are the most reliable predictors of
understandability, complexity, and maintainability.
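Both static metrics named above, SLOC and nesting depth, can be computed directly from source text. The sketch below is a deliberately crude estimator for indentation-structured (Python-style) source: it counts non-blank, non-comment lines and infers depth from leading spaces, which real tools do far more robustly via parsing.

```python
def static_metrics(source: str, indent_width: int = 4) -> dict:
    """Crude static metrics for indentation-structured source:
    SLOC = non-blank, non-comment lines; nesting depth estimated
    from leading indentation."""
    sloc, max_depth = 0, 0
    for line in source.splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith("#"):
            continue
        sloc += 1
        depth = (len(line) - len(line.lstrip(" "))) // indent_width
        max_depth = max(max_depth, depth)
    return {"sloc": sloc, "nesting_depth": max_depth}
```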
THE ILITIES
The specific metrics that are relevant
depend on the project, the
goals of the SQA, and the type of SW
that is being developed.
DEFECT PREVENTION
Defect Prevention – establishment of practices that lower the reliance on defect-detection
techniques to find the majority of the bugs
• Lessons learned – learning from other people's experiences and sharing your own
experiences with other projects
• Managing with Metrics – collecting the metrics, understanding them, and making
changes to the product or process based on analysis.
• Metrics must be standardized to be effective.
• Risk Analysis – identifying potential risks and opportunities early in the
program and tracking them to realization.
• Build freeze – no changes are made to the code during formal tests.
• Unit-level testing guidelines – test plans and procedures for each UT
• Baseline acceptance criteria – establishment of closure criteria in advance
(i.e. no P1 STRs at FAT TRR)
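A baseline acceptance criterion like "no P1 STRs at FAT TRR" is simple enough to enforce as a release gate. The sketch below assumes a hypothetical dict of open STR counts keyed by priority label:

```python
def meets_baseline_criteria(open_strs: dict) -> bool:
    """Closure criterion: no priority-1 STRs may remain open.
    The 'P1'/'P2' priority labels are illustrative."""
    return open_strs.get("P1", 0) == 0
```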
COST OF QUALITY (COQ)
 It is defined as a methodology that allows an organization to
determine the extent to which its resources are used for
activities that prevent poor quality, that appraise the quality of
the organization's products or services, and that result from
internal and external failures.
 Basically, the costs of software quality (COSQ) are those costs
incurred through both meeting and not meeting the customer's
quality expectations.
 In other words, there are costs associated with defects, but
producing a defect-free product or service has a cost as well
 The cost of poor quality includes:
 Internal and external costs resulting from failing to meet
requirements.
 The cost of good quality includes:
 Costs of investing in the prevention of non-conformance to
requirements.
 Costs of appraising a product or service for conformance to
requirements.
CATEGORIES TO MEASURE COST OF QUALITY
1. Prevention costs include the cost of training developers to
write secure and easily maintainable code.
2. Detection (appraisal) costs include the cost of creating test
cases, setting up testing environments, and revisiting testing
requirements.
3. Internal failure costs include costs incurred in fixing
defects just before delivery.
4. External failure costs include product support costs
incurred by delivering poor-quality software.
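The four categories roll up directly into the cost-of-quality split described earlier: prevention plus detection (appraisal) give the cost of good quality, and the two failure categories give the cost of poor quality. A minimal sketch:

```python
def cost_of_quality(prevention: float, appraisal: float,
                    internal_failure: float, external_failure: float) -> dict:
    """Roll the four COQ categories up into cost of good quality
    (conformance) and cost of poor quality (non-conformance)."""
    good = prevention + appraisal
    poor = internal_failure + external_failure
    return {"good": good, "poor": poor, "total": good + poor}
```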
INTERNAL FAILURE COSTS
 Examples include the costs for:
Rework
Delays
Re-designing
Shortages
Failure analysis
Re-testing
Downgrading
Downtime
Lack of flexibility and adaptability
EXTERNAL FAILURE COSTS
 Examples include the costs for:
Complaints
Repairing goods and redoing services
Warranties
Customers’ bad will
Losses due to sales reductions
Environmental costs
PREVENTION COSTS
 Examples include the costs for:
Quality planning
Supplier evaluation
New product review
Error proofing
Capability evaluations
Quality improvement team meetings
Quality improvement projects
Quality education and training
APPRAISAL COSTS
 Examples include the costs for:
Checking and testing purchased goods and services
In-process and final inspection/test
Field testing
Product, process or service audits
Calibration of measuring and test equipment
QUALITY MODEL
 Software quality models are a standardized way of
measuring a software product.
 With the increasing trend in the software industry, new
applications are planned and developed every day.
 This eventually gives rise to the need to reassure that
the product so built meets at least the expected standards.
TYPES OF SOFTWARE QUALITY MODELS
1. McCall Model
 Developed by the Rome Air Development Center (RADC), the US Air Force
Electronic Systems Division (ESD), and General Electric, in order to improve the
quality of software products at software development companies.
 The model was developed to assess the relationships between external
factors and product quality criteria.
 The quality characteristics were classified into three major types: eleven
factors which describe the external view of the software (user view),
23 quality criteria which describe the internal view of the software
(developer view), and the metrics which are used to provide a
scale and method for measurement.
 The total number of factors was reduced to eleven in order to simplify the model.
Those factors are Correctness, Integrity, Reliability, Efficiency, Usability,
Flexibility, Maintainability, Reusability, Testability, Portability, and
Interoperability.
 The major contribution of this model is the relationship between the
quality characteristics and metrics. But this model does not directly
consider the functionality of software products.
 The model classifies all software requirements into 11 software
quality factors.
 The 11 factors are grouped into three categories –
product operation, product revision, and
product transition – as follows:
 Product Operation Factors
 How well it runs….
 Correctness, reliability, efficiency, integrity, and usability
 Product Revision Factors
 How well it can be changed, tested, and redeployed.
 Maintainability; flexibility; testability
 Product Transition Factors
 How well it can be moved to different platforms and interface
with other systems
 Portability; Reusability; Interoperability
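The grouping above maps naturally onto a small lookup table, sketched here for illustration:

```python
# McCall's 11 quality factors grouped into the three categories above.
MCCALL_FACTORS = {
    "product operation": ["correctness", "reliability", "efficiency",
                          "integrity", "usability"],
    "product revision": ["maintainability", "flexibility", "testability"],
    "product transition": ["portability", "reusability", "interoperability"],
}

def category_of(factor: str) -> str:
    """Look up which McCall category a quality factor belongs to."""
    for category, factors in MCCALL_FACTORS.items():
        if factor in factors:
            return category
    raise KeyError(factor)
```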
[Figure: McCall's software quality factors model – the software quality factors grouped into product operation, product revision, and product transition factors]
2. Boehm Model
 Boehm added new factors to McCall's model with emphasis
on the maintainability of the software product at software
development companies.
 The main aim of this model is to address the contemporary
shortcomings of models that automatically and
quantitatively evaluate the quality of software.
 The Boehm model represents the characteristics of the software
product hierarchically, so that each contributes to the total
quality.
 Also, the software product is evaluated with
respect to the utility of the program.
 But this model contains only a diagram, without any
suggestion about how to measure the quality characteristics.
3. FURPS Model
 The FURPS model was proposed by Robert Grady at Hewlett-Packard Co.
 The attributes are classified into two main categories according
to the user's requirements: functional and non-functional
requirements.
 Functional requirements (F): defined by input and expected
output.
 Non-functional requirements (URPS):
 Usability, reliability, performance, supportability. This model
was later extended by IBM Rational Software into FURPS+.
 Thus, this model considered only the user's requirements and
disregarded the developer's considerations.
 Moreover, this model fails to take into account some of the
software product characteristics, like maintainability and portability.
4.Dromey Model
 Dromey (1995) states that evaluation is different for
each product; thus a dynamic idea for process modeling is
required.
 Thus, the main idea of the proposed model was to obtain a
model broad enough to work for different systems.
 The model also seeks to increase understanding of the
relationship between the attributes (characteristics) and the
sub-attributes (sub-characteristics) of quality.
 This model defines two layers: the high-level attributes
and the subordinate attributes.
 However, this model suffers from a lack of criteria for the
measurement of software quality.
5. ISO IEC 9126 Model
 As many software quality models were proposed, confusion
arose and a new standard model was required.
 Thus, ISO/IEC JTC1 began to develop the required
consensus and to encourage standardization worldwide.
 ISO/IEC 9126 is one of the most important international
standards for software product quality.
 The first considerations originated in 1978, and in
1985 the development of ISO/IEC 9126 was started.
 In this model, for software development companies, the
totality of software product quality attributes is classified
in a hierarchical tree structure of characteristics and sub-
characteristics.
 The highest level of this structure consists of the quality
characteristics and the lowest level consists of the software
quality criteria.
 This model specifies six characteristics: Functionality,
Reliability, Usability, Efficiency, Maintainability, and
Portability, which are further divided into 21 sub-characteristics.
 All these sub-characteristics are manifested externally when
the software is used as part of a computer system, and thus are
the result of internal software attributes.
 All the defined characteristics are applicable to every kind of
software, including computer programs and data contained in
firmware, and provide consistent terminology for software
product quality.
 They also provide a framework for making trade-offs
between software product capabilities.

Software Quality Assurance- Introduction

  • 1.
    MODULE 1: SOFTWAREQUALITY ASSURANCE
  • 2.
    THE SOFTWARE DEVELOPMENTLIFE CYCLE (SDLC)  It is a structured process that enables the production of high-quality, low-cost software, in the shortest possible production time.  The goal of the SDLC is to produce superior software that meets 2
  • 3.
  • 4.
    DEFECT/BUG LIFE CYCLEIN SOFTWARE TESTING  It is the specific set of states that defect or bug goes through in its entire life.  The purpose is to easily coordinate and communicate current status of defect which changes to various assignees and make the defect fixing process systematic and efficient.  Defect Status or Bug Status is the present state from which the defect or a bug is currently undergoing.  The goal is to precisely convey the current state or progress of a defect or bug in order to better track and understand the actual progress of the defect life cycle. 4
  • 5.
  • 6.
    DEFECT STATES WORKFLOW The number of states that a defect goes through varies from project to project  New: When a new defect is logged and posted for the first time. It is assigned a status as NEW.  Assigned: Once the bug is posted by the tester, the lead of the tester approves the bug and assigns the bug to the developer team  Open: The developer starts analyzing and works on the defect fix  Fixed: When a developer makes a necessary code change and verifies the change, he or she can make bug status as “Fixed.”  Pending retest: Once the defect is fixed the developer gives a particular code for retesting the code to the tester. Since the software testing remains pending from the testers end, the status assigned is “pending retest.”  Retest: Tester does the retesting of the code at this stage to check whether the defect is fixed by the developer or not and changes the status to “Re-test.” 6
  • 7.
    WHAT IS QUALITY? Quality– developed product meets it’s specification Problems: • Development organization has requirements exceeding customer's specifications (added cost of product development) • Certain quality characteristics can not be specified in unambiguous terms (i.e. maintainability) • Even if the product conforms to it’s specifications, users may not consider it to be a quality product (because users may not be involved in the development of the requirements)
  • 8.
    8 WHAT IS QUALITYMANAGEMENT? Quality Management – ensuring that required level of product quality is achieved • Defining procedures and standards • Applying procedures and standards to the product and process • Checking that procedures are followed • Collecting and analyzing various quality data Problems: • Intangible aspects of software quality can’t be standardized (i.e elegance and readability)
  • 9.
    WHY QUALITY ISIMPORTANT? Why business should be concerned with quality:  Quality is competitive issue now  Quality is a must for survival  Quality gives you the global reach  Quality is cost effective  Quality helps retain customers and increase profits  Quality is the hallmarks of world-class business
  • 10.
    10 WHAT ARE SQA,SQP, SQC, AND SQM? SQA includes all 4 elements… • Software Quality Assurance – establishment of network of organizational procedures and standards leading to high- quality software 2. Software Quality Planning – selection of appropriate procedures and standards from this framework and adaptation of these to specific software project 3. Software Quality Control – definition and enactment of processes that ensure that project quality procedures and standards are being followed by the software development team 4. Software Quality Metrics – collecting and analyzing quality data to predict and control quality of the software product being developed
  • 11.
    ELEMENTS OF SQA 1.Software Quality Assurance – establishment of network of organizational procedures and standards leading to high- quality software 2. Software Quality Planning – selection of appropriate procedures and standards from this framework and adaptation of these to specific software project 3. Software Quality Control – definition and enactment of processes that ensure that project quality procedures and standards are being followed by the software development team 4. Software Quality Metrics – collecting and analyzing quality data to predict and control quality of the software product being developed
  • 12.
    GOALS OF SQA 1.to improve software quality by monitoring both the process and the product 2. to ensure compliance with all local standards for SE 3. to ensure that any product defect, process variance, or standards non- compliance is noted and fixed
  • 13.
    SOFTWARE QUALITY ASSURANCE(SQA)  It is a way to assure quality in the software.  It is the set of activities which ensure processes, procedures as well as standards are suitable for the project and implemented correctly.  It is a process which works parallel to development of software.  It focuses on improving the process of development of software so that problems can be prevented before they become a major issue.  Itis a kind of Umbrella activity that is applied throughout the software process.  Software Quality Assurance has: 1. A quality management approach 2. Formal technical reviews 3. Multi testing strategy 4. Effective software engineering technology 5. Measurement and reporting mechanism
  • 14.
    MAJOR SOFTWARE QUALITYASSURANCE ACTIVITIES: 1. SQA Management Plan: Make a plan for how you will carry out the SQA through out the project.  Think about which set of software engineering activities are the best for project. check level of SQA team skills. 2. Set The Check Points: SQA team should set checkpoints.  Evaluate the performance of the project on the basis of collected data on different check points. 3. Multi testing Strategy: Do not depend on a single testing approach.  When you have a lot of testing approaches available use them. 4. Measure Change Impact: The changes for making the correction of an error sometimes re introduces more errors keep the measure of impact of change on project.  Reset the new change to change check the compatibility of this fix with whole project. 5. Manage Good Relations: In the working environment managing good relations with other teams involved in the project development is mandatory. Bad relation of SQA team with programmers team will impact directly and badly on project. Don’t play politics.
  • 15.
    QUALITY IMPROVEMENT –THE WHEEL OF 6 SIGMA Six Sigma
  • 16.
QUALITY IMPROVEMENT – SIX SIGMA PROCESS
• Visualize – Understand how the process works now and imagine how it will work in the future
• Commit – Obtain commitment to change from the stakeholders
• Prioritize – Define priorities for incremental improvements
• Characterize – Define the existing process and the time progression for the incremental improvements
• Improve – Design and implement the identified improvements
• Achieve – Realize the results of the change
BENEFITS OF SOFTWARE QUALITY ASSURANCE
1. SQA produces high-quality software.
2. High-quality applications save time and cost.
3. SQA is beneficial for better reliability.
4. SQA is beneficial when no maintenance is required for a long time.
5. High-quality commercial software increases the market share of the company.
6. SQA improves the process of creating software.
7. SQA improves the quality of the software.
MEASURES OF SOFTWARE QUALITY ASSURANCE
1. Reliability – Includes aspects such as availability, accuracy, and the recoverability of the system to continue functioning under specified use over a given period of time. For example, recoverability of the system from a shut-down failure is a reliability measure.
2. Performance – Measures the throughput of the system using system response time, recovery time, and start-up time. Performance testing is done to measure the responsiveness and stability of the system under a heavy workload.
3. Functionality – Represents that the system satisfies its main functional requirements. It simply refers to the required and specified capabilities of a system.
4. Supportability – There are a number of other requirements or attributes that a software system must satisfy, including testability, adaptability, maintainability, scalability, and so on. These requirements generally enhance the capability to support the software.
5. Usability – The capability or degree to which a software system is easy to understand and use by its specified users or customers to achieve specified goals with effectiveness, efficiency, and satisfaction. It includes aesthetics, consistency, documentation, and responsiveness.
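The availability aspect of the reliability measure above can be made concrete with the standard availability formula. A minimal sketch in Python; the MTBF/MTTR figures are purely illustrative assumptions:

```python
# Availability as a reliability measure: the fraction of time the system
# is operational, computed from mean time between failures (MTBF) and
# mean time to repair (MTTR). The sample figures are illustrative only.

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    return mtbf_hours / (mtbf_hours + mttr_hours)

# e.g., a failure every 500 h on average, 2 h to restore service:
uptime_fraction = availability(500, 2)
print(f"{uptime_fraction:.4%}")
```

The same ratio, tracked over releases, gives a simple recoverability trend to report alongside the other SQA measures.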
    SOFTWARE QUALITY PLAN Tailoring - SQP should select those organizational standards that are appropriate to a particular product  Standardization - SQP should use (call out) only approved organizational process and product standards  If new standards are required a quality improvement should be initiated  Elements - SQP elements are usually based on the ISO- 9001 model elements  SQP is not written for software developers. It’s written for SQE’s as a guide for SQC and for the customer to monitor development activities  Things like software production, software product plans and risk management should be defined in SDP, IP  Quality Factor’s shouldn’t be sacrificed to achieve efficiency. Don’t take the job if quality process can’t be upheld
  • 20.
    SOFTWARE QUALITY CONTROL Quality control is defined as “a set of activities designed to evaluate the quality of a developed or manufactured product”  quality control inspection and other activities take place before the product is shipped to the client  prevents causes of errors, and detect and correct them early in the development process.  reduces the rate of products that do not qualify for shipment.  Quality control and quality assurance activities serve different objectives.
  • 21.
METHODS OF SOFTWARE QUALITY CONTROL
SQC involves overseeing the software development process to ensure that the procedures and standards (STDs) are being followed. The following activities constitute SQC:
 Quality Reviews – in-process reviews of processes and products. Reviews are the most widely used method of validating the quality of processes and products. Reviews make quality everyone's responsibility; quality must be built in. The SQE is responsible for writing Quality Engineering Records (QERs) documenting their participation in these reviews.
 Tests – end-result verifications of products, conducted after the software has been developed. Test procedures are followed during the conduct of these activities. The SQE is responsible for keeping the logs and sometimes for writing the test report.
 Quality Audits – in-process verifications of processes, conducted periodically (e.g., twice a month) to assess compliance with the process STDs.
    QUALITY REVIEWS  Peerreviews - reviews of processes and products by groups of people.  These reviews require pre-review preparation by all participants.  If a participant is not prepared, then the review is not effective.  This type of review requires participation of the SQE, moderator, recorder, author(s), and one or more critical reviewers.  All issues found during these reviews are documented on AR forms.  Walkthroughs - reviews of products by groups of people mostly without preparation.  For example a requirements traceability review is a walkthrough.  It involves tracing a requirement from customer requirements to the test procedures.  All issues found during these reviews are documented on CAR forms.  Desk inspections - reviews of products by individuals.  These reviews involve people reviewing products by themselves (not in a group) and then submitting their comments to the author(s).  The issues found during these reviews are treated in informal manner.
  • 23.
    TESTS  Engineering Dry-run- test conducted by engineering without SQE.  These tests include Unit Tests and engineering dry-runs of the formal tests.  These engineering dry-runs are used to verify correctness and completeness of the test procedures.  Also, these is the final engineering verification of the end-product before sell-off to SQE.  All issues found during these tests are documented on STR forms.  SQE Dry-run - test conducted by SQE.  These tests include PQT, FAT and SAT dry-runs.  These tests are used to verify the end-product before the formal test with the customer.  An SQE is sometimes responsible for writing the test report.  However, if a separate test group is available, then SQE is relived of this obligation.  All issues found during these tests are documented on STR forms.  TFR - test conducted as “RFR - run-for-record” with the SQE and the customer.  These tests include FAT and SAT.  These tests are conducted to sell the end-product off to the customer.  SQE is present at all such tests.  All issues found during these tests are documented on STR forms.
  • 24.
    QUALITY AUDITS  SQEAudits - audits conducted by SQE to verify that the process STD’s are being followed.  Examples of these audits are IPDS compliance, Configuration Control, and Software Engineering Management.  All findings for these audits are documented on QER forms.  The results of the audits are distributed to the next level of management (above project level).  If the issue(s) are not fixed then the findings are elevated to upper management.  Independent Audits - audits conducted by ISO generalists or other independent entities to verify that the process STD’s are being followed.  These audits are usually conducted on a division/facility level.  The results of these audits are distributed to upper management.
  • 25.
SOFTWARE CONFIGURATION MANAGEMENT
 SCM – activities assuring that software products are properly identified and their transitions are tracked. In many mature organizations SCM is not part of the SQA responsibilities.
• Baseline Identification – identification of the initial state of the product
• Change Identification – identification of changes made to the baseline
• Change Control – documentation of changes via a revision history, a change summary, or automated development tools (e.g., ClearCase or Apex)
• Status Accounting – reporting changes to others and monitoring the completeness of the project archives
• Preservation – acting as the keeper of the software products
SOFTWARE QUALITY CHALLENGES
1. Defining it
2. Describing it (qualitatively)
3. Measuring it (quantitatively)
4. Achieving it (technically)
METRICS COLLECTION
 Software measurement – the process of deriving a numeric value for some attribute of a software product or a software process. Comparing these values to each other and to standards allows conclusions to be drawn about the quality of software products or of the process.
• The focus of metrics-collection programs is usually on collecting metrics on program defects and the V&V process.
• Metrics can be either control metrics or predictor metrics.
• Most of the "ilities" cannot be measured directly unless historical data exists. Instead, tangible software product attributes are measured and the "ility" factors are derived using predefined relationships between the measurable and synthetic attributes.
• The boundary conditions for all measurements should be established in advance and then revised once a large databank of historical data has been built up.
THE PROCESS OF PRODUCT MEASUREMENT
1. Decide what data is to be collected.
2. Assess critical (core) components first.
3. Measuring component characteristics might require automated tools.
4. Look for unusually high or low values.
5. Analysis of the anomalous components should reveal whether the quality of the product is compromised.
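The "look for unusual values" step above can be sketched as a simple outlier check over per-component metric values. The z-score rule, the 1.5-standard-deviation threshold, and the component scores are all assumptions made for illustration:

```python
# Flag components whose metric value sits unusually far from the mean.
# A simple z-score-style rule; the 1.5-standard-deviation cut-off is an
# assumed threshold, not something prescribed by the slides.
from statistics import mean, stdev

def anomalous(values: dict[str, float], z: float = 1.5) -> list[str]:
    """Names of components more than z standard deviations from the mean."""
    m, s = mean(values.values()), stdev(values.values())
    return [name for name, v in values.items() if abs(v - m) > z * s]

# Hypothetical per-component complexity scores:
scores = {"parser": 12, "ui": 10, "db": 11, "auth": 13, "legacy_io": 48}
print(anomalous(scores))  # the flagged component warrants a closer review
```

In practice the flagged components would be the first candidates for the anomaly analysis in step 5.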
PREDICTOR AND CONTROL METRICS
Examples of predictor analysis:
• Code reuse: SLOC = ELOC + ported code
• Nesting depth: ND > 5 = low readability
• Risk analysis: # of P1 STRs > 0 at SAT = low product reliability
Examples of control analysis:
• STR aging: old STRs = low productivity
• Requirements volatility: high volatility = scope creep
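The nesting-depth predictor above can be applied mechanically. A sketch, assuming Python source and an AST-based definition of nesting depth; both of these choices, and which node types count as nesting levels, are illustrative assumptions:

```python
# Measure nesting depth (ND) of Python source and apply the slide's
# "ND > 5 = low readability" predictor rule. The set of AST nodes that
# count as a nesting level is an assumption.
import ast

BLOCK_NODES = (ast.If, ast.For, ast.While, ast.With, ast.Try,
               ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)

def max_nesting_depth(source: str) -> int:
    def depth(node: ast.AST) -> int:
        deepest_child = max((depth(c) for c in ast.iter_child_nodes(node)),
                            default=0)
        return deepest_child + (1 if isinstance(node, BLOCK_NODES) else 0)
    return depth(ast.parse(source))

snippet = """
def f(xs):
    for x in xs:
        if x > 0:
            while x:
                x -= 1
"""
nd = max_nesting_depth(snippet)
print(nd, "low readability" if nd > 5 else "acceptable")
```

Run over a whole codebase, this kind of check turns the predictor into a control: modules breaching the threshold can be queued for refactoring review.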
SOFTWARE PRODUCT METRICS
There are two categories of software product metrics:
1. Dynamic metrics – collected by measuring elements during the program's execution. These metrics help to assess the efficiency and reliability of a software product. The parameters collected can be easily measured (e.g., execution time, mean time between failures).
2. Static metrics – collected by measuring parameters of the end products of the software development. These metrics help to assess the complexity, understandability, and maintainability of a software product. SLOC size and ND are the most reliable predictors of understandability, complexity, and maintainability.
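As a minimal static-metric example, SLOC can be collected without ever running the program. A sketch; the blank-line and full-line-comment conventions assumed here are Python's:

```python
# A tiny static-metric collector: count source lines of code (SLOC),
# skipping blank lines and full-line comments. The '#' comment
# convention assumed here is Python's.
def sloc(source: str) -> int:
    return sum(1 for line in source.splitlines()
               if line.strip() and not line.strip().startswith("#"))

sample = "# demo\n\nx = 1\ny = x + 1\n"
print(sloc(sample))  # only the two assignment lines count
```

Dynamic metrics, by contrast, would require instrumenting the running program (e.g., timing calls), which is why the two categories are collected by different tooling.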
THE ILITIES
The specific metrics that are relevant depend on the project, the goals of the SQA, and the type of software that is being developed.
DEFECT PREVENTION
 Defect prevention – the establishment of practices that lower the reliance on defect-detection techniques to find the majority of the bugs.
• Lessons learned – learning from other people's experiences and sharing your own experiences with other projects.
• Managing with metrics – collecting the metrics, understanding them, and making changes to the product or process based on the analysis. Metrics must be standardized to be effective.
• Risk analysis – identifying potential risks and opportunities early in the program and tracking them to realization.
• Build freeze – no changes are made to the code during formal tests.
• Unit-level testing guidelines – test plans and procedures for each unit test.
• Baseline acceptance criteria – establishment of closure criteria in advance (e.g., no P1 STRs at the FAT TRR).
COST OF QUALITY (COQ)
 COQ is defined as a methodology that allows an organization to determine the extent to which its resources are used for activities that prevent poor quality, that appraise the quality of the organization's products or services, and that result from internal and external failures.
 Basically, the costs of software quality (COSQ) are those costs incurred through both meeting and not meeting the customer's quality expectations.
 In other words, there are costs associated with defects, but producing a defect-free product or service has a cost as well.
 The cost of poor quality comprises:
 internal and external costs resulting from failing to meet requirements.
 The cost of good quality comprises:
 costs of investing in the prevention of non-conformance to requirements;
 costs of appraising a product or service for conformance to requirements.
CATEGORIES TO MEASURE COST OF QUALITY
1. Prevention costs include the cost of training developers to write secure and easily maintainable code.
2. Detection costs include the cost of creating test cases, setting up testing environments, and revisiting testing requirements.
3. Internal failure costs include the costs incurred in fixing defects just before delivery.
4. External failure costs include the product support costs incurred by delivering poor-quality software.
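The four categories above add up to the total cost of quality: prevention plus detection (appraisal) form the cost of good quality, and internal plus external failure form the cost of poor quality. A sketch with entirely made-up dollar figures:

```python
# Sum the four COQ categories from the slide. Prevention + appraisal
# (detection) = cost of good quality; internal + external failure =
# cost of poor quality. All figures are invented for illustration.
costs = {
    "prevention":       12_000,  # e.g., secure-coding training
    "appraisal":        18_000,  # e.g., test cases, test environments
    "internal_failure":  9_000,  # e.g., pre-delivery defect fixes
    "external_failure": 25_000,  # e.g., post-delivery product support
}

cost_of_good_quality = costs["prevention"] + costs["appraisal"]
cost_of_poor_quality = costs["internal_failure"] + costs["external_failure"]
total_coq = cost_of_good_quality + cost_of_poor_quality
print(cost_of_good_quality, cost_of_poor_quality, total_coq)
```

Comparing the two subtotals over time shows whether investment in prevention and appraisal is actually reducing failure costs.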
    INTERNAL FAILURE COSTS Examples include the costs for: Rework Delays Re-designing Shortages Failure analysis Re-testing Downgrading Downtime Lack of flexibility and adaptability
  • 39.
    EXTERNAL FAILURE COSTS Examples include the costs for: Complaints Repairing goods and redoing services Warranties Customers’ bad will Losses due to sales reductions Environmental costs
  • 40.
    PREVENTION COSTS  Examplesinclude the costs for: Quality planning Supplier evaluation New product review Error proofing Capability evaluations Quality improvement team meetings Quality improvement projects Quality education and training
  • 41.
    APPRAISAL COSTS  Examplesinclude the costs for: Checking and testing purchased goods and services In-process and final inspection/test Field testing Product, process or service audits Calibration of measuring and test equipment
  • 42.
    QUALITY MODEL  SoftwareQuality Models are a Standardized way of measuring a software product.  With the increasing trend in software industry, new applications are planned and developed everyday.  This eventually gives rise to the need for reassuring that the product so built meets at least the expected standards 42
  • 43.
TYPES OF SOFTWARE QUALITY MODELS
1. McCall Model
 Developed by the Rome Air Development Center (RADC), the US Air Force Electronic Systems Division (ESD), and General Electric, in order to improve the quality of software products at software development companies.
 The model was developed to assess the relationships between external factors and product quality criteria.
 The quality characteristics were classified into three major types: eleven factors which describe the external view of the software (the user view), 23 quality criteria which describe the internal view of the software (the developer view), and the metrics which define and provide a scale and method for measurement.
 The total number of factors was reduced to eleven in order to simplify the model: correctness, integrity, reliability, efficiency, usability, flexibility, maintainability, reusability, testability, portability, and interoperability.
 The major contribution of this model is the relationship between the quality characteristics and the metrics. However, the model does not directly consider the functionality of software products.
 The McCall model classifies all software requirements into 11 software quality factors, grouped into three categories – product operation, product revision, and product transition – as follows:
 Product operation factors – how well it runs: correctness, reliability, efficiency, integrity, and usability.
 Product revision factors – how well it can be changed, tested, and redeployed: maintainability, flexibility, and testability.
 Product transition factors – how well it can be moved to different platforms and interfaced with other systems: portability, reusability, and interoperability.
[Figure: McCall's software quality factors model – the software quality factors grouped into product operation, product revision, and product transition factors]
    2. Boehm Model Boehm added new factors to McCall’s model with emphasis on the maintainability of software product at software development companies.  The main aim of this model is to address the contemporary shortcomings of models that automatically and quantitatively evaluate the quality of software.  Boehm model represents the characteristics of the software product hierarchically in order to get contribute in the total quality.  Also, the software product evaluation considered with respect to the utility of the program.  But, this model contains only a diagram without any suggestion about measuring the quality characteristics.
  • 47.
    3. FURPS Model FURPS model was proposed by and Hewlett-Packard Co and Robert Grady.  The attributes were classified into two main categories according to the user’s requirements, the functional and non-functional requirements.  Functional requirements (F): Defined by input and expected output.  Non-functional requirements (URPS):  Usability, reliability, performance, supportability. Also, this model was extended by IBM Rational Software – into FURPS+.  Thus, this model considered only the user’s requirements and disregards the developer consideration.  But, this model fails to take into account the software some of the product characteristics, like maintainability and portability.
  • 48.
4. Dromey Model
 Dromey (1995) states that evaluation differs for each product, so a dynamic idea for process modeling is required.
 Thus, the main idea of the proposed model was to obtain a model broad enough to work for different systems.
 The model also seeks to increase understanding of the relationship between the attributes (characteristics) and the sub-attributes (sub-characteristics) of quality.
 The model defines two layers: the high-level attributes and the subordinate attributes.
 However, this model suffers from a lack of criteria for the measurement of software quality.
5. ISO/IEC 9126 Model
 As many software quality models were proposed, confusion arose and a new standard model was required. Thus, ISO/IEC JTC1 began to develop the required consensus and encourage standardization worldwide.
 ISO 9126 is part of the ISO 9000 standard family, and it is the most important standard for quality assurance.
 The first considerations originated in 1978, and in 1985 the development of ISO/IEC 9126 was started.
 In this model, the totality of software product quality attributes is classified in a hierarchical tree structure of characteristics and sub-characteristics.
 The highest level of this structure consists of the quality characteristics, and the lowest level consists of the software quality criteria.
 This model specifies six characteristics – functionality, reliability, usability, efficiency, maintainability, and portability – all of which are further divided into 21 sub-characteristics.
 All these sub-characteristics manifest externally when the software is used as part of a computer system, and are thus the result of internal software attributes.
 All the defined characteristics are applicable to every kind of software, including computer programs and data contained in firmware, and they provide consistent terminology for software product quality.
 They also provide a framework for making trade-offs between software product capabilities.