Software Engineering
KCS-601, Unit-IV
Dr APJ Abdul Kalam Technical
University, Lucknow
By
Dr Anuranjan Misra
Innovation Ambassador, Ministry of Education, Government of India
& Professor & Dean, GNIOT, Greater Noida
Testing Objectives
• Testing is the process of executing a program
with the intent of finding errors.
• A good test case is one with a high probability of
finding an as-yet undiscovered error.
• A successful test is one that discovers an as-yet-
undiscovered error.
Testing Principles
• All tests should be traceable to customer
requirements.
• Tests should be planned before testing begins.
• 80% of all errors are in 20% of the code.
• Testing should begin in the small and progress to
the large.
• Exhaustive testing is not possible for real
programs due to combinatorial explosion of
possible test cases
• Testing should be conducted by an independent
third party if possible.
Debugging vs. Testing
• Debugging is the process of finding errors in a
program under development that is not thought to
be correct.
• Testing is the process of attempting to find errors
in a program that is thought to be correct.
• Testing attempts to establish that a program
satisfies its specifications.
• Testing can establish the presence of errors
but cannot guarantee their absence.
• Amount of testing performed must be balanced
against the cost of undiscovered errors.
Verification and Validation
• The distinction between the two terms is largely
to do with the role of specifications.
• Validation is the process of checking whether the
specification captures the customer's needs. That
is, Specifications are as per customer
requirements/needs.
• Whereas, verification is the process of checking
that the software meets the specification. That is,
software is as per specifications.
• Testing = Validation + Verification
SOFTWARE TESTING FUNDAMENTALS
What is Testing?
• Testing is a process of exercising software to verify that it
satisfies specified requirements and to detect errors.
• Software testing is a process of executing a program or
application with the intent of finding the software bugs.
• It can also be stated as the process of validating and
verifying that a software program or application or
product:
1. Meets the business and technical requirements
that guided its design and development
2. Works as expected
3. Can be implemented with the same characteristics.
Software Testability is simply how easily a computer
program can be tested. The following characteristics lead
to testable software.
• Operability
• Observability
• Controllability
• Decomposability
• Simplicity
• Stability
• Understandability
Why is Software Testing necessary?
Software testing is very important because of the following
reasons:
1. Software testing is really required to point out the defects
and errors that were made during the development phases.
2. It is essential because it ensures the customer's
confidence in and satisfaction with the application.
3. It is very important to ensure the Quality of the product.
Quality product delivered to the customers helps in gaining
their confidence.
4. Testing is required for an effective performance of
software application or product.
What is an error in software testing?
• The mistake made by programmer is known as an ‘Error’.
This could happen because of the following reasons:
• Because of some confusion in understanding the
functionality of the software
• Because of some miscalculation of the values
• Because of misinterpretation of any value, etc.
What is a Failure in software testing?
• If, under certain circumstances, these defects get executed
by the tester during testing, the result is a failure, which
is known as a software failure.
• Not all defects result in failures; some may stay inactive in
the code and we may never notice them. Example:
defects in dead code will never result in failures.
Error: Human mistake that caused fault
Fault: Discrepancy in code that causes a failure.
Failure: External behavior is incorrect
Test Characteristics
Kaner, Falk, and Nguyen [Kan93] suggest the following
attributes of a "good" test:
• A good test has a high probability of finding an error
• A good test is not redundant
• A good test should be "best of breed"
• A good test should be neither too simple nor too complex
Artifacts of testing
Test case:
A test case is a set of conditions or variables under which a tester will
determine whether a system under test satisfies requirements or works
correctly.
Test plan:
A Software Test Plan is a document describing the testing scope and
activities. It is the basis for formally testing any software/product in a
project.
Test script:
A Test Script is a set of instructions (written using a
scripting/programming language) that is performed on a system under
test to verify that the system performs as expected. Test scripts are used
in automated testing
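As an illustration, the following is a minimal sketch of an automated test script in Java. The add method and the check helper are hypothetical, introduced only for this example; real test scripts would normally be written with a framework such as JUnit.

```java
// Minimal sketch of an automated test script (hypothetical add method under test).
public class CalculatorTestScript {

    // Unit under test (assumed for illustration).
    static int add(int a, int b) {
        return a + b;
    }

    // A very small check helper that reports pass/fail for each test case.
    static void check(String testCase, int expected, int actual) {
        String result = (expected == actual) ? "PASS" : "FAIL";
        System.out.println(testCase + ": expected=" + expected
                + " actual=" + actual + " -> " + result);
    }

    public static void main(String[] args) {
        check("adds two positive numbers", 5, add(2, 3));
        check("adds a negative number", -1, add(2, -3));
        check("adds zero", 2, add(2, 0));
    }
}
```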
What are the principles of testing?
There are seven principles of testing. They are as follows:
1) Testing shows presence of defects
2) Exhaustive testing is impossible
3) Early testing
4) Defect clustering
5) Pesticide paradox
6) Testing is context dependent
7) Absence-of-errors fallacy
Software Testing Strategy
Unit testing : Code
Integration testing : Design
Validation test : Requirements
System test : System engineering architecture
• Unit Testing – Used to detect errors in each
software component individually.
• Integration Testing – Verification & program
construction when interaction among components
occurs.
• System Testing – The elements forming a system are
tested as a whole.
• Validation Testing – Validates that the software meets
functional, behavioural & performance
requirements.
INTERNAL AND EXTERNAL VIEWS OF TESTING
Testing is done by two methods
1. Black box testing
2. White box testing
Black-box testing alludes to tests that are conducted at the
software interface. A black-box test examines some
fundamental aspect of a system with little regard for the
internal logical structure of the software.
White-box testing of software is predicated on close
examination of procedural detail. Logical paths through the
software and collaborations between components are
tested by exercising specific sets of conditions and/or loops.
BLACK BOX TESTING
• Black box testing is also known as functional
testing and Behavioral Testing.
• It is a software testing technique whereby the
internal workings of the item being tested are not
known by the tester.
• In black box testing the tester only knows the
inputs and what the expected outcomes should be
and not how the program arrives at those outputs.
• The tester does not ever examine the
programming code and does not need any further
knowledge of the program other than its
specification
This method attempts to find errors in the
following categories:
• Incorrect or missing functions
• Interface errors
• Errors in data structures or external database
access
• Behavior or performance errors
• Initialization and termination errors
Advantages
• The test is unbiased because the designer and the tester are
independent of each other.
• The tester does not need knowledge of any specific
programming languages.
• The test is done from the point of view of the user, not the
designer.
• Test cases can be designed as soon as the specifications are
complete.
Disadvantages
• The test can be redundant if the software designer has already
run a test case.
• The test cases are difficult to design.
• Testing every possible input stream is unrealistic because it
would take an inordinate amount of time; therefore, many
program paths will go untested.
White - Box Testing
• White-box testing, sometimes called glass-box testing, is a test-case
design philosophy that uses the control structure described as part of
component-level design to derive test cases.
• Using white-box testing methods, you can derive test cases that
(1) Guarantee that all independent paths within a module have been
exercised at least once,
(2) Exercise all logical decisions on their true and false sides,
(3) Execute all loops at their boundaries and within their operational
bounds
(4) Exercise internal data structures to ensure their validity
Structural testing is also called white-box testing.
In structural testing, test cases are derived from the program
structure.
The basis path testing technique is one of a number of techniques for
control structure testing.
White box testing involves the testing of the software
code for the following:
• Internal security holes
• Broken or poorly structured paths in the coding
processes
• The flow of specific inputs through the code
• Expected output
• The functionality of conditional loops
• Testing of each statement, object and function on
an individual basis
Advantages
• Testing can be commenced at an earlier stage. One need
not wait for the GUI to be available.
• Testing is more thorough, with the possibility of covering
most paths.
Disadvantages
• Since tests can be very complex, highly skilled resources
are required, with thorough knowledge of programming
and implementation.
• Test script maintenance can be a burden if the
implementation changes too frequently.
• Since this method of testing is closely tied to the
application being tested, tools to cater to every kind of
implementation/platform may not be readily available.
Difference between Black box and White box testing
White box testing
1. Basis Path Testing
• Flow Graph Notation
• Independent Program Paths
• Deriving Test Cases
• Graph Matrices
2. Control Structure Testing
• Condition Testing
• Data Flow Testing
• Loop Testing
Flow Graph Notation
• A simple notation for the representation of control flow,
called a flow graph, is introduced below.
• Each circle, called a flow graph node, represents
one or more procedural statements.
• A sequence of process boxes and a decision
diamond can map into a single node.
• The arrows on the flow graph, called edges or
links, represent flow of control and are analogous
to flowchart arrows.
• An edge must terminate at a node, even if the
node does not represent any procedural
statements.
• Areas bounded by edges and nodes are called
regions.
A compound condition occurs when one or more Boolean operators
(logical OR, AND, NAND, NOR) are present in a conditional statement.
Referring to Figure the program design language (PDL) segment
translates into the flow graph shown. Note that a separate node is
created for each of the conditions a and b in the statement IF a OR b.
Each node that contains a condition is called a predicate node and is
characterized by two or more edges emanating from it
Independent Program Paths
Cyclomatic Complexity
• Cyclomatic complexity is a software metric that
provides a quantitative measure of the logical
complexity of a program.
• When used in the context of the basis path testing
method, the value computed for cyclomatic
complexity defines the number of independent
paths in the basis set of a program and provides
us with an upper bound for the number of tests
that must be conducted to ensure that all
statements have been executed at least once
• An independent path is any path through the
program that introduces at least one new set of
processing statements or a new condition. When
stated in terms of a flow graph, an independent
path must move along at least one edge that has
not been traversed before the path is defined. For
example, a set of independent paths for the flow
graph is
• path 1: 1-11
• path 2: 1-2-3-4-5-10-1-11
• path 3: 1-2-3-6-8-9-10-1-11
• path 4: 1-2-3-6-7-9-10-1-11
Cyclomatic complexity has a foundation in graph theory and
provides us with an extremely useful software metric.
Complexity is computed in one of three ways:
1. The number of regions of the flow graph corresponds to the
cyclomatic complexity.
2. Cyclomatic complexity, V(G), for a flow graph, G, is defined as
V(G) = E - N + 2
where E is the number of flow graph edges, N is the number of
flow graph nodes.
3. Cyclomatic complexity, V(G), for a flow graph, G, is also
defined as
V(G) = P + 1
where P is the number of predicate nodes contained in the flow
graph G.
1. The flow graph has four regions.
2. V(G) = 11 edges - 9 nodes + 2 = 4.
3. V(G) = 3 predicate nodes + 1 = 4.
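The following Java sketch simply evaluates the formulas above for the flow graph just discussed (11 edges, 9 nodes, 3 predicate nodes); the method names are introduced only for illustration.

```java
// Sketch: computing cyclomatic complexity with the equivalent formulas.
public class CyclomaticComplexity {

    // V(G) = E - N + 2, where E = number of edges and N = number of nodes.
    static int fromEdgesAndNodes(int edges, int nodes) {
        return edges - nodes + 2;
    }

    // V(G) = P + 1, where P = number of predicate nodes.
    static int fromPredicateNodes(int predicateNodes) {
        return predicateNodes + 1;
    }

    public static void main(String[] args) {
        // Values taken from the flow graph above: 11 edges, 9 nodes, 3 predicate nodes.
        System.out.println("V(G) = E - N + 2 = " + fromEdgesAndNodes(11, 9)); // 4
        System.out.println("V(G) = P + 1     = " + fromPredicateNodes(3));    // 4
        // The number of regions of the flow graph (4) gives the same result.
    }
}
```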
Deriving Test Cases
The basis path testing method can be applied to a procedural
design or to source code. In this section, we present basis path
testing as a series of steps. The procedure average, depicted in
PDL in figure below, will be used as an example to illustrate
each step in the test case design method. Note that average,
although an extremely simple algorithm, contains compound
conditions and loops
• Using the design or code as a foundation, draw a
corresponding flow graph.
• Determine the cyclomatic complexity of the resultant flow graph
• Determine a basis set of linearly independent paths.
• Prepare test cases that will force execution of each path in the
basis set.
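The average procedure and its figure are not reproduced here. As a simpler, hedged illustration of the four steps, consider the hypothetical method below: its flow graph has two predicate nodes, so V(G) = 3, and one test case is prepared to force execution of each path in the basis set.

```java
// Illustrative sketch (not the textbook "average" procedure): deriving a basis set
// for a small method with two simple conditions.
public class BasisPathExample {

    // V(G) = P + 1 = 2 predicate nodes + 1 = 3, so three basis paths
    // are enough to execute every statement at least once.
    static String classify(int mark) {
        if (mark < 0) {             // predicate node 1
            return "invalid";       // path 1
        }
        if (mark >= 40) {           // predicate node 2
            return "pass";          // path 2
        }
        return "fail";              // path 3
    }

    public static void main(String[] args) {
        // One test case per basis path:
        System.out.println(classify(-5)); // forces path 1 -> "invalid"
        System.out.println(classify(75)); // forces path 2 -> "pass"
        System.out.println(classify(10)); // forces path 3 -> "fail"
    }
}
```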
Graph Matrices
• To develop a software tool that assists in basis path
testing, a data structure, called a graph matrix, can be
quite useful.
• A graph matrix is a square matrix whose size (i.e.,
number of rows and columns) is equal to the number of
nodes on the flow graph.
• Each row and column corresponds to an identified node,
and matrix entries correspond to connections (an edge)
between nodes. A simple example of a flow graph and its
corresponding graph matrix is shown in figure.
CONTROL STRUCTURE TESTING
• Condition Testing
• Data Flow Testing
• Loop Testing
The basis path testing technique is one of a number of
techniques for control structure testing. Although basis path
testing is simple and highly effective, it is not sufficient in
itself. Other variations on control structure testing help in
broadening testing coverage and improve the quality of white-
box testing.
Condition Testing
Condition testing is a test-case design method that
exercises the logical conditions contained in a program
module. A simple condition is a Boolean variable or a
relational expression, possibly preceded with one NOT (¬)
operator. A relational expression takes the form
E1 <relational-operator> E2
where E1 and E2 are arithmetic expressions and
<relational-operator> is one of the following: <, ≤, =, ≠
(nonequality), >, or ≥. A compound condition is composed
of two or more simple conditions, Boolean operators, and
parentheses.
Data Flow Testing
The data flow testing method selects test paths of a
program according to the locations of definitions and uses
of variables in the program.
For a statement with S as its statement number,
DEF(S) = {X | statement S contains a definition of X}
USE(S) = {X | statement S contains a use of X}
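A small hypothetical example may make the DEF and USE sets concrete; the statement numbers S1 to S3 and the variable names are assumptions for illustration only.

```java
// Sketch: DEF and USE sets for a few numbered statements (hypothetical example).
public class DefUseExample {

    static int scale(int x) {
        int y = 2 * x;      // S1: DEF(S1) = {y},  USE(S1) = {x}
        int z = y + 1;      // S2: DEF(S2) = {z},  USE(S2) = {y}
        return z * x;       // S3: DEF(S3) = {},   USE(S3) = {z, x}
    }
    // y is defined in S1 and used in S2, so a data-flow test path
    // should cover S1 followed by S2 (a definition-use pair for y).

    public static void main(String[] args) {
        System.out.println(scale(3)); // 21
    }
}
```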
Loop Testing
Loop testing is a white-box testing technique that focuses
exclusively on the validity of loop constructs. Four different
classes of loops [Bei90] can be defined: simple loops,
concatenated loops, nested loops, and unstructured
loops as shown in Figure.
Classes of Loops
Simple loops. The following set of tests can be applied to
simple loops, where n is the maximum number of allowable
passes through the loop.
1. Skip the loop entirely.
2. Only one pass through the loop.
3. Two passes through the loop.
4. m passes through the loop where m < n.
5. n -1, n, n + 1 passes through the loop.
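A sketch of how these test cases might be realized for a hypothetical loop with at most n allowable passes is shown below.

```java
// Sketch: applying the simple-loop test cases to a hypothetical loop
// that may execute at most n = data.length passes.
public class SimpleLoopTests {

    // Sums the first 'passes' elements; the loop body runs 'passes' times.
    static int sumFirst(int[] data, int passes) {
        int total = 0;
        for (int i = 0; i < passes; i++) {
            total += data[i];
        }
        return total;
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
        int n = data.length; // maximum number of allowable passes

        System.out.println(sumFirst(data, 0));     // 1. skip the loop entirely
        System.out.println(sumFirst(data, 1));     // 2. one pass
        System.out.println(sumFirst(data, 2));     // 3. two passes
        System.out.println(sumFirst(data, 5));     // 4. m passes, where m < n
        System.out.println(sumFirst(data, n - 1)); // 5. n - 1 passes
        System.out.println(sumFirst(data, n));     //    n passes
        // n + 1 passes would exceed the bound and, here, throw an
        // ArrayIndexOutOfBoundsException, the kind of boundary defect
        // this class of tests is meant to expose.
    }
}
```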
Nested loops
Beizer suggests an approach that will help to reduce the
number of tests:
1. Start at the innermost loop. Set all other loops to
minimum values.
2. Conduct simple loop tests for the innermost loop while
holding the outer loops at their minimum iteration
parameter (e.g., loop counter) values. Add other tests for
out-of-range or excluded values.
3. Work outward, conducting tests for the next loop, but
keeping all other outer loops at minimum values and other
nested loops to "typical" values.
4. Continue until all loops have been tested.
Concatenated loops
Concatenated loops can be tested using the approach
defined for simple loops, if each of the loops is independent
of the other.
Unstructured loops
This class of loops should be redesigned to reflect the use of
the structured programming constructs.
BLACK-BOX TESTING
Black-box testing, also called behavioural testing, focuses
on the functional requirements of the software. That is,
black-box testing techniques enable you to derive sets of input
conditions that will fully exercise all functional requirements
for a program.
Black-box testing attempts to find errors in the following
categories:
(1) Incorrect or missing functions,
(2) Interface errors
(3) Errors in data structures or external database access,
(4) Behaviour or performance errors
(5) Initialization and termination errors
Types of Black box testing:
• Random Testing
• Equivalence class partitioning
• Boundary value analysis
• Graph based testing
• Orthogonal array testing
Random Testing
Random testing is a black-box software testing technique
where programs are tested by generating random,
independent inputs.
Results of the output are compared against software
specifications to verify that the test output is pass or fail.
• It eliminates the need for exhaustive testing.
• Random testing can save time and effort.
Equivalence class partitioning
• Equivalence partitioning (EP) is a specification-based or
black-box technique.
• It can be applied at any level of testing and is often a
good technique to use first.
• The idea behind this technique is to divide (i.e. to
partition) a set of test conditions into groups or sets that
can be considered the same (i.e. the system should
handle them equivalently), hence ‘equivalence
partitioning’.
• Equivalence partitions are also known as equivalence
classes – the two terms mean exactly the same thing.
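As a hedged illustration, assume a hypothetical rule that an age between 18 and 60 (inclusive) is valid. Three partitions follow, and one representative value is chosen from each.

```java
// Sketch: equivalence partitions for a hypothetical "age 18..60 is valid" rule.
public class EquivalencePartitioningExample {

    static boolean isValidAge(int age) {
        return age >= 18 && age <= 60;
    }

    public static void main(String[] args) {
        // Partition 1: age < 18  (invalid) -> representative value 10
        // Partition 2: 18..60    (valid)   -> representative value 35
        // Partition 3: age > 60  (invalid) -> representative value 70
        System.out.println(isValidAge(10)); // false
        System.out.println(isValidAge(35)); // true
        System.out.println(isValidAge(70)); // false
        // One representative per partition is assumed to behave like
        // every other member of that partition.
    }
}
```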
Boundary Value Analysis
A greater number of errors occurs at the boundaries of the
input domain rather than in the “center.” It is for this reason
that boundary value analysis (BVA) has been developed as
a testing technique. Boundary value analysis leads to a
selection of test cases that exercise bounding values
Boundary value analysis is a test-case design technique
that complements equivalence partitioning. Rather than
selecting any element of an equivalence class, BVA
leads to the selection of test cases at the "edges" of the
class.
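Continuing the same hypothetical 18-to-60 example, boundary value analysis would select test values just below, on, and just above the edges of the valid class.

```java
// Sketch: boundary value analysis for the same hypothetical 18..60 rule.
public class BoundaryValueExample {

    static boolean isValidAge(int age) {
        return age >= 18 && age <= 60;
    }

    public static void main(String[] args) {
        // Values around both boundaries of the valid class.
        int[] boundaryValues = {17, 18, 19, 59, 60, 61};
        for (int age : boundaryValues) {
            System.out.println(age + " -> " + isValidAge(age));
        }
        // Expected: 17 false, 18 true, 19 true, 59 true, 60 true, 61 false.
    }
}
```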
Graph-Based Testing Methods
The first step in black-box testing is to understand the
objects that are modelled in software and the relationships
that connect these objects
The symbolic representation of a graph is shown
• Nodes are represented as circles connected by links that
take a number of different forms.
• A directed link (represented by an arrow) indicates that a
relationship moves in only one direction.
• A bidirectional link, also called a symmetric link, implies
that the relationship applies in both directions.
• Parallel links are used when a number of different
relationships are established between graph nodes.
Orthogonal Array Testing
Orthogonal array testing can be applied to problems in
which the input domain is relatively small but too large to
accommodate exhaustive testing. The orthogonal array
testing method is particularly useful in finding region
faults—an error category associated with faulty logic within
a software component.
Regression testing
• Regression testing is the re-execution of some subset of tests
that have already been conducted to ensure that changes have
not propagated unintended side effects.
• Regression testing is the activity that helps to ensure that
changes do not introduce unintended behavior or additional
errors.
• Each new addition or change to baselined software may cause
problems with functions that previously worked flawlessly
• Regression testing re-executes a small subset of tests that
have already been conducted
• Ensures that changes have not propagated unintended side
effects
• Helps to ensure that changes do not introduce unintended
behavior or additional errors
• May be done manually or through the use of automated
capture/playback tools
Regression testing
Regression test suite contains three different classes of
test cases
• A representative sample of tests that will exercise all
software functions
• Additional tests that focus on software functions that are
likely to be affected by the change
• Tests that focus on the actual software components that
have been changed
• Therefore, the regression test suite should be designed to
include only those tests that address one or more classes
of errors in each of the major program functions. It is
impractical and inefficient to reexecute every test for
every program function once a change has occurred.
Unit Testing
• Unit testing focuses verification effort on the smallest unit
of software design—the software component or module.
Using the component-level design description as a guide,
important control paths are tested to uncover errors within
the boundary of the module.
• The unit test focuses on the internal processing logic and
data structures within the boundaries of a component.
This type of testing can be conducted in parallel for
multiple components.
Unit-test considerations
• The module interface is tested to ensure that information
properly flows into and out of the program unit under test.
• Local data structures are examined to ensure that data stored
temporarily maintains its integrity during all steps in an
algorithm's execution.
• All independent paths through the control structure are
exercised to ensure that all statements in a module have been
executed at least once.
• Boundary conditions are tested to ensure that the module
operates properly at boundaries established to limit or restrict
processing. And finally, all error-handling paths are tested.
• Selective testing of execution paths is an essential task during
the unit test. Test cases should be designed to uncover errors
due to erroneous computations, incorrect comparisons, or
improper control flow.
Unit-test procedures
• Unit testing is normally considered as an adjunct to the coding step.
• The design of unit tests can occur before coding begins or after
source code has been generated.
• A review of design information provides guidance for establishing test
cases that are likely to uncover errors.
• A component is not a stand-alone program; driver and/or stub
software must often be developed for each unit test.
• The unit test environment is illustrated in Figure. In most applications
a driver is nothing more than a "main program" that accepts test
case data, passes such data to the component (to be tested), and
prints relevant results.
• A stub or "dummy subprogram" uses the subordinate module's
interface, may do minimal data manipulation, prints verification of
entry, and returns control to the module undergoing testing.
• Unit testing is simplified when a component with high cohesion is
designed.
• When only one function is addressed by a component, the number of
test cases is reduced and errors can be more easily predicted and
uncovered.
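The sketch below shows one possible shape of a driver and a stub in Java. The TaxService interface, the stub's fixed return value, and the InvoiceCalculator component are all hypothetical, introduced only to illustrate the roles described above.

```java
// Sketch: a test driver and a stub used to unit-test a component in isolation.

// Interface of the subordinate module that the component under test depends on.
interface TaxService {
    double taxFor(double amount);
}

// Stub ("dummy subprogram"): uses the subordinate module's interface,
// does minimal data manipulation, prints verification of entry, and returns control.
class TaxServiceStub implements TaxService {
    @Override
    public double taxFor(double amount) {
        System.out.println("TaxServiceStub called with amount = " + amount);
        return 10.0; // fixed, predictable value for the test
    }
}

// Component under test.
class InvoiceCalculator {
    private final TaxService taxService;

    InvoiceCalculator(TaxService taxService) {
        this.taxService = taxService;
    }

    double total(double amount) {
        return amount + taxService.taxFor(amount);
    }
}

// Driver: a "main program" that accepts test case data, passes it to the
// component under test, and prints relevant results.
public class InvoiceCalculatorDriver {
    public static void main(String[] args) {
        InvoiceCalculator calculator = new InvoiceCalculator(new TaxServiceStub());
        double result = calculator.total(100.0);
        System.out.println("total(100.0) = " + result + " (expected 110.0) -> "
                + (result == 110.0 ? "PASS" : "FAIL"));
    }
}
```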
Integration Testing
• Integration testing is a systematic technique for constructing
the software architecture while at the same time conducting
tests to uncover errors associated with interfacing. The
objective is to take unit-tested components and build a program
structure that has been dictated by design.
• There is often a tendency to attempt nonincremental
integration; that is, to construct the program using a "big bang"
approach. All components are combined in advance. The entire
program is tested as a whole. A set of errors is encountered.
Correction is difficult because isolation of causes is
complicated by the vast expanse of the entire program. Once
these errors are corrected, new ones appear and the process
continues in a seemingly endless loop.
• The program is constructed and tested in small increments,
where errors are easier to isolate and correct; interfaces are
more likely to be tested completely; and a systematic test
approach may be applied.
Top-down integration
• Top-down integration testing is an incremental approach
to construction of the software architecture.
• Modules are integrated by moving downward through the
control hierarchy, beginning with the main control module
(main program).
• Modules subordinate (and ultimately subordinate) to the
main control module are incorporated into the structure in
either a depth-first or breadth-first manner.
• Depth-first integration integrates all components on a
major control path of the program structure.
• Selection of a major path is arbitrary and depends on
application-specific characteristics
• Breadth-first integration incorporates all components
directly subordinate at each level, moving across the
structure horizontally.
• The integration process is performed in a series of five
steps:
Top-down integration
1. The main control module is used as a test driver and
stubs are substituted for all components directly
subordinate to the main control module.
2. Depending on the integration approach selected (i.e.,
depth or breadth first), subordinate stubs are replaced one
at a time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is
replaced with the real component.
5. Regression testing may be conducted to ensure that new
errors have not been introduced.
The top-down integration strategy verifies major control or
decision points early in the test process.
Bottom-up integration:
Bottom-up integration testing, as its name implies, begins
construction and testing with atomic modules (i.e., components
at the lowest levels in the program structure). Because
components are integrated from the bottom up, the functionality
provided by components subordinate to a given level is always
available and the need for stubs is eliminated.
A bottom-up integration strategy may be implemented with the
following steps:
1. Low-level components are combined into clusters (sometimes
called builds) that perform a specific software subfunction.
2. A driver (a control program for testing) is written to coordinate
test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving
upward in the program structure.
Integration test approaches
There are four types of integration testing approaches.
Those approaches are the following
1. Big-Bang Integration Testing
It is the simplest integration testing approach, where all the
modules are combined and the functionality is verified after
the completion of individual module testing. In simple
words, all the modules of the system are simply put
together and tested. This approach is practicable only for
very small systems. If once an error is found during the
integration testing, it is very difficult to localize the error as
the error may potentially belong to any of the modules
being integrated. So, debugging errors reported during big
bang integration testing are very expensive to fix.
Bottom-Up Integration Testing
In bottom-up testing, each module at lower levels is tested with
higher modules until all modules are tested. The primary
purpose of this integration testing is to test, for each subsystem,
the interfaces among the various modules making up the
subsystem. This integration testing uses test drivers to drive and
pass appropriate data to the lower level modules.
Top-Down Integration Testing
In top-down integration testing, stubs are used to simulate
the behaviour of the lower-level modules that are not yet
integrated. In this integration testing, testing takes place from top
to bottom. First high-level modules are tested, then low-level
modules, and finally the low-level modules are integrated with the
high-level ones to ensure the system is working as intended.
Mixed Integration Testing
A mixed integration testing is also called sandwiched
integration testing. A mixed integration testing follows a
combination of top down and bottom-up testing
approaches. In top-down approach, testing can start only
after the top-level modules have been coded and unit tested.
In bottom-up approach, testing can start only after the
bottom level modules are ready. This sandwich or mixed
approach overcomes this shortcoming of the top-down and
bottom-up approaches.
VALIDATION TESTING
• Validation testing begins at the culmination of integration
testing, when individual components have been exercised,
the software is completely assembled as a package, and
interfacing errors have been uncovered and corrected.
• At the validation or system level, the distinction between
conventional software, object-oriented software, and
WebApps disappears. Testing focuses on user-visible
actions and user-recognizable output from the system.
• If a Software Requirements Specification has been
developed, it describes all user-visible attributes of the
software and contains a Validation Criteria section that
forms the basis for a validation-testing approach
Software validation is achieved through a series of tests that
demonstrate conformity with requirements.
• A test plan outlines the classes of tests to be conducted, and a
test procedure defines specific test cases that are designed to
ensure that all functional requirements are satisfied, all
behavioral characteristics are achieved, all content is accurate
and properly presented, all performance requirements are
attained, documentation is correct, and usability and other
requirements are met (e.g., transportability, compatibility, error
recovery, maintainability).
• After each validation test case has been conducted, one of two
possible conditions exists: (1) The function or performance
characteristic conforms to specification and is accepted or (2) a
deviation from specification is uncovered and a deficiency list is
created. Deviations or errors discovered at this stage in a project
can rarely be corrected prior to scheduled delivery.
Configuration Review
An important element of the validation process is a
configuration review. The intent of the review is to ensure that
all elements of the software configuration have been properly
developed, are cataloged, and have the necessary detail to
bolster the support activities. The configuration review is
sometimes called an audit.
Alpha and Beta Testing
Most software product builders use a process called alpha and
beta testing to uncover errors that only the end user seems able
to find.
Alpha Testing
The alpha test is conducted at the developer's site by a
representative group of end users. The software is used in a
natural setting with the developer "looking over the shoulder" of
the users and recording errors and usage problems. Alpha tests
are conducted in a controlled environment.
Beta Testing
• The beta test is conducted at one or more end-user sites.
Unlike alpha testing, the developer generally is not present.
Therefore, the beta test is a “live” application of the software
in an environment that cannot be controlled by the developer.
The customer records all problems (real or imagined) that are
encountered during beta testing and reports these to the
developer at regular intervals. As a result of problems reported
during beta tests, you make modifications and then prepare for
release of the software product to the entire customer base.
• A variation on beta testing, called customer acceptance
testing, is sometimes performed when custom software is
delivered to a customer under contract. The customer performs
a series of specific tests in an attempt to uncover errors before
accepting the software from the developer
SYSTEM TESTING
System testing is actually a series of different tests whose
primary purpose is to fully exercise the computer-based
system.
Various types of system tests are
• Recovery Testing
• Security Testing
• Stress Testing
• Performance Testing
• Deployment Testing
Recovery Testing
Recovery testing is a system test that forces the software to fail
in a variety of ways and verifies that recovery is properly
performed. If recovery is automatic (performed by the system
itself), reinitialization, checkpointing mechanisms, data recovery,
and restart are evaluated for correctness.
Security Testing
• Security testing attempts to verify that protection mechanisms
built into a system, in fact, protect it from improper penetration.
• During security testing, the tester plays the role(s) of the
individual who desires to penetrate the system.
• The role of the system designer is to make penetration cost
more than the value of the information that will be obtained.
Stress Testing
• Stress tests are designed to confront programs with abnormal
situations.
• Stress testing executes a system in a manner that demands
resources in abnormal quantity, frequency, or volume.
• A variation of stress testing is a technique called sensitivity
testing.
• Sensitivity testing attempts to uncover data combinations within
valid input classes that may cause instability or improper
processing.
Performance Testing
• Performance testing is designed to test the run-time
performance of software within the context of an integrated
system.
• Performance tests are often coupled with stress testing and
usually require both hardware and software instrumentation.
That is, it is often necessary to measure resource utilization.
Deployment Testing
Deployment testing, sometimes called configuration
testing, exercises the software in each environment in
which it is to operate. In addition, deployment testing
examines all installation procedures and specialized
installation software (e.g., "installers") that will be used by
customers, and all documentation that will be used to
introduce the software to end users.
DEBUGGING
Debugging occurs as a consequence of successful testing.
That is, when a test case uncovers an error, debugging is
the process that results in the removal of the error.
The Debugging process
The debugging process begins with the execution of a test
case. Results are assessed and a lack of correspondence
between expected and actual performance is encountered.
The debugging process will usually have one of two
outcomes:
(1) The cause will be found and corrected or
(2) The cause will not be found. In the latter case, the
person performing debugging may suspect a cause, design
a test case to help validate that suspicion, and work toward
error correction in an iterative fashion
Why is debugging so difficult?
1. The symptom and the cause may be geographically remote.
2. The symptom may disappear (temporarily) when another error
is corrected.
3. The symptom may actually be caused by nonerrors (e.g.,
round-off inaccuracies).
4. The symptom may be caused by human error that is not easily
traced.
5. The symptom may be a result of timing problems, rather than
processing problems.
6. It may be difficult to accurately reproduce input conditions
(e.g., a real-time application in which input ordering is
indeterminate).
7. The symptom may be intermittent. This is particularly common
in embedded systems that couple hardware and software
inextricably.
8. The symptom may be due to causes that are distributed
across a number of tasks running on different processors.
Debugging Strategies
In general, three debugging strategies have been proposed
(1) brute force,
(2) backtracking, and
(3) cause elimination.
• Each of these strategies can be conducted manually, but
modern debugging tools can make the process much
more effective.
Debugging tactics.
• The brute force category of debugging is probably the
most common and least efficient method for isolating the
cause of a software error.
Backtracking is a fairly common debugging approach that can
be used successfully in small programs. Beginning at the site
where a symptom has been uncovered, the source code is
traced backward (manually) until the cause is found.
Unfortunately, as the number of source lines increases, the
number of potential backward paths may become unmanageably
large.
The third approach to debugging—cause elimination—is
manifested by induction or deduction and introduces the concept
of binary partitioning.
Automated debugging:
• Each of these debugging approaches can be supplemented
with debugging tools.
• A wide variety of debugging compilers, dynamic debugging
aids ("tracers"), automatic test-case generators, and cross-
reference mapping tools are available.
Correcting the Error
Once a bug has been found, it must be corrected.
Van Vleck suggests three simple questions that you should
ask before making the "correction" that removes the
cause of a bug:
• Is the cause of the bug reproduced in another part of the
program?
• What "next bug" might be introduced by the fix I'm about
to make?
• What could we have done to prevent this bug in the first
place?
Software Implementation Techniques
After detailed system design we get a design which can be
transformed into an implementation model. The goal of coding is to
implement the design in the best possible manner. Coding affects both
testing and maintenance very deeply. The coding should be done in
such a manner that, rather than merely simplifying the programmer's
job, it also simplifies the tasks of the testing and maintenance
phases.
Various objectives of coding are –
1. Programs developed in coding should be readable.
2. They should execute efficiently.
3. The program should utilize less amount of memory.
4. The programs should not be lengthy
If the objectives are clearly specified before the programmers then
while coding they try to achieve the specified objectives. To achieve
these objectives some programming principles must be followed.
Coding Practices
There are some commonly used programming practices that help in
avoiding the common errors. These are enlisted below –
1. Control construct
The single entry and single exit constructs need to be used. The
standard control constructs must be used instead of a wide variety
of controls.
2. Use of gotos
The goto statements make the program unstructured and it also
imposes overhead on the compilation process. Hence avoid use of goto
statements as far as possible and another alternative must be thought
of.
3. Information hiding
Information hiding should be supported as far as possible. In that case
only access functions to the data structures must be made visible and
the information present in it must be hidden.
4. Nesting
Nesting means defining one structure inside another. If the nesting
is too deep then it becomes hard to understand the code. Hence
as far as possible - avoid deep nesting of the code.
5. User defined data types
Modern programming languages allow the user to use defined
data types as the enumerated types. Use of user defined data
types enhances the readability of the code.
6. Module size
There is no standard rule about the size of a module, but a very
large module is unlikely to be functionally cohesive.
7. Module interface
Complex module interface must be carefully examined. A simple
rule of thumb is that the module interface with more than five
parameters must be broken into multiple modules with simple
interface.
8. Side effects
Avoid obscure side effects. If some part of the code is changed
casually, it may cause an unexpected side effect. For example, if the
number of parameters passed to a function is changed, then it
will be difficult to understand the purpose of that function.
9. Robustness
A program is said to be robust if it behaves reasonably even when
an exceptional condition occurs. In such situations the
program does not crash but exits gracefully.
10. Switch case with defaults
The choice being passed to the switch statement may have
some unpredictable value, and the default case will help to
execute the switch statement without any problem. Hence it
is a good practice to always have a default case in the switch
statement, as in the sketch below.
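A minimal Java sketch of this practice (the dayType method is hypothetical):

```java
// Sketch: always providing a default case in a switch statement.
public class SwitchDefaultExample {

    static String dayType(int dayOfWeek) {
        switch (dayOfWeek) {
            case 6:
            case 7:
                return "weekend";
            case 1:
            case 2:
            case 3:
            case 4:
            case 5:
                return "weekday";
            default:
                // Handles unpredictable values instead of silently ignoring them.
                return "invalid day: " + dayOfWeek;
        }
    }

    public static void main(String[] args) {
        System.out.println(dayType(3));  // weekday
        System.out.println(dayType(7));  // weekend
        System.out.println(dayType(42)); // invalid day: 42
    }
}
```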
11. Empty catch block
If an exception is caught but no action is taken, it is not a
good practice. Therefore take some default action, even if it is just
writing some print statement, whenever an exception is caught,
as in the sketch below.
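A minimal Java sketch of a catch block that is not left empty (parseOrDefault is a hypothetical helper):

```java
// Sketch: avoiding an empty catch block by taking at least a default action.
public class CatchBlockExample {

    static int parseOrDefault(String text, int defaultValue) {
        try {
            return Integer.parseInt(text);
        } catch (NumberFormatException e) {
            // Not empty: at minimum, record what happened before falling back.
            System.err.println("Could not parse '" + text + "', using default " + defaultValue);
            return defaultValue;
        }
    }

    public static void main(String[] args) {
        System.out.println(parseOrDefault("42", 0));  // 42
        System.out.println(parseOrDefault("abc", 0)); // logs a message, prints 0
    }
}
```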
12. Empty if and while statements
In the if and while statements some conditions are checked.
Hence if we write an empty block after checking these
conditions, then those checks prove to be useless.
Such useless checks should be avoided.
13. Check for read return
Many times the return values that we obtain from the read
functions are not checked because we blindly believe that
the desired result is present in the corresponding variable
when the read function is performed. But this may cause
some serious errors. Hence the return value must be
checked for the read operation, as in the sketch below.
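A minimal Java sketch of checking the value returned by a read operation (the file name data.txt is hypothetical):

```java
// Sketch: checking the value returned by a read operation instead of assuming it succeeded.
import java.io.FileInputStream;
import java.io.IOException;

public class ReadReturnCheckExample {
    public static void main(String[] args) {
        byte[] buffer = new byte[1024];
        try (FileInputStream in = new FileInputStream("data.txt")) { // hypothetical file name
            int bytesRead = in.read(buffer); // returns -1 at end of stream
            if (bytesRead == -1) {
                System.out.println("End of file reached, nothing was read.");
            } else {
                System.out.println("Read " + bytesRead + " bytes.");
            }
        } catch (IOException e) {
            System.err.println("Read failed: " + e.getMessage());
        }
    }
}
```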
14. Return from Finally Block
The return value must come from finally block whenever it is
possible. This helps in distinguishing the data values that are
returning from the try-catch statements
15. Trusted Data sources
Counter check should be made before accessing the input
data. For example while reading the data from the file one
must check whether the data accessed is NULL or not.
16. Correlated Parameters
Often there is a correlation between data items.
It is a good practice to check these correlations before
performing any operation on those data items.
17. Exceptions Handling
Due to some input condition, the program may not follow
the main path and may instead follow an exceptional path. In order to
make the software more reliable, it is necessary to write
code to handle such exceptional paths.
Coding Standards
Any good software development approach suggests adhering to
some well-defined standards or rules for coding. These rules are
called coding standards.
1. Naming Conventions
Following are some commonly used naming conventions in
coding (illustrated in the sketch after this list):
1. Package names and variable names should be in lower case.
2. Variable names must not begin with numbers.
3. The type name should be a noun and it should start with a capital
letter.
4. Constants must be in upper case (for example PI, SIZE).
5. Method names must be given in lower case.
6. Variables with a large scope must have long names, for
example count_total, sum; variables with a short scope must
have short names, for example i, j.
7. The prefix "is" must be used for Boolean type variables, for
example isEmpty or isFull.
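A minimal Java sketch (the StockItem class is hypothetical) illustrating several of these conventions:

```java
// Sketch: declarations following the naming conventions listed above.
package inventory;                          // package name in lower case

public class StockItem {                    // type name: a noun starting with a capital letter
    public static final int MAX_SIZE = 100; // constant in upper case
    private int totalCount = 0;             // large-scope variable: descriptive, lower case
    private boolean isEmpty = true;         // boolean variable prefixed with "is"

    public void addAll(int[] quantities) {  // method name in lower case
        // short-scope loop variable: a short name such as i is acceptable
        for (int i = 0; i < quantities.length && i < MAX_SIZE; i++) {
            totalCount += quantities[i];
        }
        isEmpty = (totalCount == 0);
    }
}
```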
2. Files
Reader must get an idea about the purpose of the file by its name.
In some programming languages like Java:
The file extension must be .java.
The name of the file and the class defined in the file must have the
same name.
Line length in the file must be limited to 80 characters.
3. Commenting/Layout
Comments are the non-executable part of the code. But they are very
important because they enhance the readability of the code. The
purpose of comments is to explain the logic of the program.
Single line comments must be given by //.
For the names of the variables, comments must be given.
A block of comments must be enclosed within /* and */.
4. Statements
• There are some guidelines about the declaration and
executable statements.
• Declare related variables on the same line and unrelated
variables on another line.
• Class variables should never be declared public.
• Make use of only loop control within the for loop.
• Avoid making use of break and continue statements in the
loop.
• Avoid complex conditional expressions. Make use of
temporary variables instead.
• Avoid the use of do...while statement.
Code refactoring
• Code refactoring is the process of restructuring existing
computer code – changing the factoring – without changing
its external behavior. Refactoring improves nonfunctional
attributes of the software. Advantages include improved
code readability and reduced complexity to improve source
code maintainability, and create a more expressive internal
architecture or object model to improve extensibility.
• By continuously improving the design of code, we make it
easier and easier to work with. This is in sharp contrast to
what typically happens: little refactoring and a great deal of
attention paid to expediently adding new features. If you get
into the hygienic habit of refactoring continuously, you'll find
that it is easier to extend and maintain code.
There are two general categories of benefits to the activity of
refactoring.
1. Maintainability. It is easier to fix bugs because the
source code is easy to read and the intent of its author is
easy to grasp. This might be achieved by reducing large
monolithic routines into a set of individually concise, well-
named, single-purpose methods. It might be achieved by
moving a method to a more appropriate class, or by
removing misleading comments.
2. Extensibility. It is easier to extend the capabilities of the
application if it uses recognizable design patterns, and it
provides some flexibility where none before may have
existed.
Refactoring is needed due to various reasons. They are as follows:
• Lack of Modularity – Existing feature of one application can’t be
used in another application due to its tight coupling with the
application components.
• Lack of Reusable Components – There are instances of code
duplication, and potentially reusable components depend on
application code.
• Lack of Pluggable Components – Existing components are not
easily replaceable because the application code is tightly coupled
with the components.
• Service Oriented Architecture – Scope for SOA components
where each component can work as a service and reusable.
• Code Redundancy – Application has lots of dead code and
duplicate code.
• Lack of Layered Architecture – Any change in one layer causes
changes in all other layers.
• Poor Coding Style – Coding standards have not been
followed properly, which includes improper names for
objects/methods and accessing fields without getters/setters.
• Illogical Methods Composition – Illogical grouping of
methods in one class.
• Improper Packaging – Artifacts are placed in the
application code which could be kept at other locations,
forcing developers to change the jars in each
application manually instead of updating them at a central
location.
• Use of Old Version of Third Party Application/Jars –
Application is using older versions of software instead of
the latest version, and hence new features can't be used
and explored in the application.
Steps for Refactoring:
1. Writing unit test cases – Test cases should be written to test the
application behavior and ensure that it is unchanged after every
cycle of refactoring.
2. Identifying the task for refactoring – Identify the set of tasks for
performing the refactoring.
3. Find the problem – Find out if there is any issue with the current
piece of code and, if yes, specify what the problem is.
4. Evaluate/Analyze the problem – After finding out the potential
problem evaluate it against the risks involved and the benefits.
5. Design solution – Find out what will be the resultant code after
refactoring of the code. Design solution which leads from current
state to the target state.
6. Modify the code – Refactor the code without changing the outer
behavior of the code.
7. Test the refactored code; if it fails, roll back to the last smaller
change and repeat the refactoring in a different way.
8. Repeat the above cycle until the current code reaches the target
state.
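A minimal Java sketch of one refactoring cycle, assuming a hypothetical PriceCalculator: a tax computation is extracted into a well-named method, while a simple check pins down the unchanged external behaviour before and after the change.

```java
// Sketch: one small refactoring cycle -- behaviour is unchanged, only the structure improves.
public class PriceCalculator {

    // Before: one long expression with a "magic number" buried in it.
    static double totalBefore(double price, int quantity) {
        return price * quantity + price * quantity * 0.18;
    }

    // After: the tax rule is extracted into a concise, well-named, single-purpose method.
    static final double TAX_RATE = 0.18;

    static double totalAfter(double price, int quantity) {
        double subtotal = price * quantity;
        return subtotal + taxOn(subtotal);
    }

    static double taxOn(double subtotal) {
        return subtotal * TAX_RATE;
    }

    public static void main(String[] args) {
        // Step 1 of the cycle: a test that pins down the external behaviour before refactoring.
        double before = totalBefore(100.0, 2);
        double after = totalAfter(100.0, 2);
        System.out.println(before == after
                ? "PASS: behaviour unchanged"
                : "FAIL: roll back the change");
    }
}
```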
Risks Involved in Refactoring
• If there is a requirement for code change then refactoring
occurs. The main risk of refactoring is that existing working
code may break due to the changes being made. This is the
main reason why most often refactoring is not done. In
order to mitigate or avoid this risk following two rules must
be followed:
• Rule 1: Re-factor in small steps.
• Rule 2: For testing the existing functionalities make use of
test scripts.
• By following these two rules the bugs in the refactoring can
be easily identified and corrected. In each refactoring only
small change is made but the series of refactoring makes a
significant transformation in the program structure
Key Benefits from Refactoring
• Improves software extensibility
• Reduces maintenance cost
• Provides standardized code
• Architecture improvement without impacting software
behavior
• Provides more readable and modular code
• Refactored modular component increases potential
reusability.
Software maintenance
• “The modification of a software product after delivery to correct
faults, to improve performance or other attributes, or to adapt the
product to a modified environment.”
• Modifying a program after it has been put into use, after delivery
to customer.
• Maintenance does not normally involve major changes to the
system’s architecture
• Maintenance is often contracted out/ outsourced
There are number of reasons, why modifications are required,
some of them are briefly mentioned below:
• Market Conditions
• Client Requirements
• Host Modifications
• Organization Changes
Types of software maintenance
Corrective maintenance
• Impractical to exhaustively test a large software system.
Therefore reasonable to assume that any large system will
have errors.
• Testing should be thorough for common cases, so the remaining
errors are likely to be obscure.
• The process of receiving reports of such errors, diagnosing
the problem, and fixing it is called "corrective maintenance"
Adaptive maintenance
• Systems don't function in isolation.
• Typically they may interact with operating systems,
DBMS's, GUI's, network protocols, other external software
packages, and various hardware platforms
• In the IT industry any or all of these may change over a
very short period (typically six months)
• The process of assessing the effects of such
"environmental changes" on a software system, and then
modifying the system to cope with those changes is known
as "adaptive maintenance
Perfective maintenance
• Users and marketers are never satisfied
• Even if a system is wildly successful, someone will want
new or enhanced features added to it
• Sometimes there will also be impetus to alter the way a
certain component of the system works, or its interface
• The process of receiving suggestions and requests for such
enhancements or modifications, evaluating them, and implementing
the approved changes is known as "perfective maintenance"
Preventative maintenance
• Sometimes changes are needed for entirely internal
reasons
• Such changes have no direct discernible effect on the user,
but lay the groundwork for easier maintenance in the future
• Alternatively, such changes may improve reliability, or
provide a better basis for future development
• The process of planning such code reorganizations,
implementing them, and testing to ensure that they have no
adverse impact is known as "preventative maintenance"
Cost of Maintenance
Reports suggest that the cost of maintenance is high. A
study on estimating software maintenance found that the
cost of maintenance is as high as 67% of the cost of entire
software process cycle. On an average, the cost of software
maintenance is more than 50% of all SDLC phases.
Typical problems with maintenance
• Inadequate documentation of software evolution
• Inadequate documentation of software design and structure
• Loss of "cultural" knowledge of software due to staff
turnover
• Lack of allowance for change in original software design
• Maintenance is unglamorous and may be viewed as a
"punishment task"
Software-end factors affecting Maintenance Cost
• Structure of Software Program
• Programming Language
• Dependence on external environment
• Staff reliability and availability
Maintenance Activities
IEEE provides a framework for sequential maintenance
process activities. It can be used in iterative manner and can
be extended so that customized items and processes can be
included.
• Identification & Tracing - It involves activities pertaining to
identification of requirement of modification or maintenance. It is
generated by the user, or the system may itself report it via logs or
error messages. Here, the maintenance type is also classified.
• Analysis - The modification is analyzed for its impact on the
system including safety and security implications. If probable
impact is severe, alternative solution is looked for. A set of
required modifications is then materialized into requirement
specifications. The cost of modification/maintenance is analyzed
and estimation is concluded.
• Design - New modules, which need to be replaced or modified,
are designed against requirement specifications set in the
previous stage. Test cases are created for validation and
verification.
• Implementation - The new modules are coded with the help of
structured design created in the design step. Every programmer
is expected to do unit testing in parallel.
• System Testing - Integration testing is done among the newly created modules. Integration testing is also carried out between the new modules and the system. Finally, the system is tested as a whole, following regression testing procedures.
• Acceptance Testing - After testing the system internally, it is tested for acceptance with the help of users. If at this stage users report issues, they are addressed or noted for the next iteration.
• Delivery - After the acceptance test, the system is deployed across the organization either as a small update package or as a fresh installation of the system. The final testing takes place at the client end after the software is delivered. Training is provided if required, in addition to the hard copy of the user manual.
• Maintenance management - Configuration management is an essential part of system maintenance. It is aided by version control tools that handle versions, semi-versions, and patch management.
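To make the sequence and iterative use of these phases concrete, here is a minimal illustrative sketch in Python. It is not part of the IEEE framework itself; the Phase names and the MaintenanceRequest structure are assumptions chosen only for illustration.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Phase(Enum):
    # The seven phases listed above, in order.
    IDENTIFICATION = auto()
    ANALYSIS = auto()
    DESIGN = auto()
    IMPLEMENTATION = auto()
    SYSTEM_TESTING = auto()
    ACCEPTANCE_TESTING = auto()
    DELIVERY = auto()

@dataclass
class MaintenanceRequest:
    description: str                   # e.g. text from a user report or an error log
    maintenance_type: str              # corrective / adaptive / perfective / preventative
    completed_phases: list = field(default_factory=list)

def run_iteration(request: MaintenanceRequest) -> None:
    """Walk one iteration of the sequential maintenance process."""
    for phase in Phase:
        # In a real process each phase produces artefacts (specs, designs,
        # test cases, ...); here we only record that the phase was performed.
        request.completed_phases.append(phase.name)

# Example: one pass through the process for an adaptive change.
req = MaintenanceRequest("Adapt report module to new DBMS version", "adaptive")
run_iteration(req)
print(req.completed_phases)
```

Because the framework is iterative, a further change request would simply trigger another pass through the same phases.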
Reengineering:
• Re-engineering is the process of restructuring or rewriting existing software to improve its quality and maintainability without changing its functionality.
• The process typically encompasses a combination of other processes such as reverse engineering, re-documentation, restructuring, translation, and forward engineering.
• For example, Unix was initially developed in assembly language. When the C language came into existence, Unix was re-engineered in C, because working in assembly language was difficult.
• Besides this, programmers sometimes notice that a few parts of the software need more maintenance than others, and these also need re-engineering.
Business Process Reengineering (BPR)
Also known as process innovation and core process redesign, BPR attempts to restructure or obliterate unproductive management layers, wipe out redundancies, and remodel processes differently. Business process re-engineering refers to the analysis, control, and development of a company’s systems and workflow.
Define Objectives and Framework:
• First of all, the objectives of re-engineering must be defined in quantitative and qualitative terms. The objectives are the end results that the management desires after the re-engineering.
• Once the objectives are defined, the need for change should be well communicated to the employees, because the success of BPR depends on the readiness of the employees to accept the change.
Identify Customer Needs:
• While redesigning the business process, the needs of the customers must be given prior consideration. The process shall be redesigned in such a way that it clearly provides added value to the customer. One must take the following parameters into consideration:
• Customer’s expected utilities in product and services
• Customer requirements, buying habits and consuming
tendencies.
• Customer problems and expectations about the product or
service.
Study the Existing Process:
Before deciding on the changes to be made in the existing business
process, one must analyze it carefully.
The existing process provides a base for the new process, and hence the "what" and "why" of the new process can be well designed by studying the rights and wrongs of the existing business plan.
Formulate a Redesign Business Plan:
Once the existing business process is studied thoroughly, the required changes are written down and converted into an ideal redesign process. Here, all the changes are chalked out, and the best among all the alternatives is selected.
Implement the Redesign:
Finally, the changes are implemented into the redesign plan to achieve dramatic improvements. It is the responsibility of both the management and the designer to operationalise the new process and gain the support of all. Thus, business process re-engineering is a collection of interrelated tasks or activities designed to accomplish the specified outcome.
Re-Engineering Process
• Decide what to re-engineer. Is it the whole software or a part of it?
• Perform Reverse Engineering, in order to obtain
specifications of existing software.
• Restructure the program if required. For example, changing function-oriented programs into object-oriented programs.
• Re-structure data as required.
• Apply Forward engineering concepts in order to get re-
engineered software.
Reengineering process model
Code restructuring
• Source code is analyzed; violations of structured programming practices are noted and repaired (see the small before/after sketch below).
• Revised code needs to be reviewed and tested.
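As a small, hypothetical illustration of code restructuring (the functions below are invented, not taken from the slides), the sketch replaces a flag-laden, unstructured loop with a structured equivalent. Behaviour is unchanged; only the structure improves, and the revised code would still be reviewed and tested.

```python
# Before: an unstructured version that violates structured-programming
# practice (status flags, an unbounded while-True loop, manual indexing).
def find_first_negative_unstructured(values):
    found = False
    i = 0
    result = None
    while True:
        if i >= len(values):
            break
        if not found:
            if values[i] < 0:
                result = i
                found = True
        i += 1
    return result

# After: the restructured, behaviour-preserving equivalent.
def find_first_negative(values):
    for i, v in enumerate(values):
        if v < 0:
            return i
    return None

# Both versions must agree; the restructured code is then reviewed and tested.
assert find_first_negative_unstructured([3, 1, -4, 5]) == find_first_negative([3, 1, -4, 5]) == 2
```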
Data restructuring
• Usually requires full reverse engineering
• Current data architecture is dissected
• Data models are defined
• Existing data structures are reviewed for quality (see the sketch below)
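As a small, hypothetical sketch of data restructuring (the record layout and entity names are invented for illustration), a flat legacy record is dissected into an explicit data model that can then be reviewed for quality.

```python
from dataclasses import dataclass

# Legacy flat record as it might be recovered by reverse engineering.
legacy_row = "CUST001|Asha Verma|INV-17|1200.50"

# Restructured data model: the same information, but with explicit
# entities, names and types.
@dataclass
class Customer:
    customer_id: str
    name: str

@dataclass
class Invoice:
    invoice_id: str
    amount: float
    customer: Customer

def restructure(row: str) -> Invoice:
    customer_id, name, invoice_id, amount = row.split("|")
    return Invoice(invoice_id, float(amount), Customer(customer_id, name))

print(restructure(legacy_row))
```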
Reverse Engineering
• It is a process of obtaining the system specification by thoroughly analyzing and understanding the existing system. This process can be seen as a reverse SDLC model, i.e. we try to reach a higher abstraction level by analyzing lower abstraction levels.
• An existing system is a previously implemented design about which little or nothing may be known. Designers then do reverse engineering by looking at the code and trying to recover the design.
• With the design in hand, they try to conclude the specifications, thus going in reverse from code to system specification.
Forward Engineering
• Forward engineering is a process of obtaining desired
software from the specifications in hand which were
brought down by means of reverse engineering. It
assumes that there was some software engineering
already done in the past.
• Forward engineering is the same as the software engineering process, with only one difference: it is always carried out after reverse engineering.
• In forward engineering, one takes a set of primitives of
interest, builds them into a working system, and then
observes what the system can and cannot do.
Difference between Forward Engineering
and Reverse Engineering
• Goal: The goal of forward engineering is to develop new
software from scratch, while the goal of reverse engineering is
to analyze and understand an existing software system.
• Process: Forward engineering involves designing and
implementing a new software system based on requirements
and specifications. Reverse engineering involves analyzing an
existing software system to understand its design, structure,
and behavior.
• Tools and Techniques: Forward engineering often involves the
use of software development tools, such as IDEs, code
generators, and testing frameworks. Reverse engineering often
involves the use of reverse engineering tools, such as
decompilers, disassemblers, and code analyzers.
Difference between Forward
Engineering and Reverse Engineering
• Focus: Forward engineering focuses on the creation of
new code and functionality, while reverse engineering
focuses on understanding and documenting existing code
and functionality.
• Output: The output of forward engineering is a new
software system, while the output of reverse engineering is
documentation of an existing software system, such as a
UML diagram, flowchart, or software specification.
Software Reviews
• A review is any activity in which a work product is
exposed to reviewers to examine and give
feedback.
• Purpose is to find defects (errors) before they are
passed on to another software engineering
activity or released to the customer.
• Software engineers (and others) conduct formal
technical reviews (FTR)
• Using formal technical reviews is an effective
means for improving software quality.
Review Roles
• Presenter (designer/producer).
• Coordinator
• Recorder
• records events of meeting
• builds paper trail
• Reviewers
• maintenance experts
• standards bearer
• user representative
• others
Formal Technical Reviews
• Involves 3 to 5 people (including reviewers)
• Advance preparation (no more than 2 hours per
person) required
• Duration of review meeting should be less than 2
hours
• Focus of review is on a discrete work product
• Review leader organizes the review meeting at
the producer's request.
Formal Technical Reviews
• Reviewers ask questions that enable the producer to
discover his or her own error (the product is under
review not the producer)
• Producer of the work product walks the reviewers
through the product
• Recorder writes down any significant issues raised
during the review
• Reviewers decide to accept or reject the work product
or to go for additional reviews of the product.
• All work products and components like SRS, schedules,
design documents, code, test plans, test cases, and
defect reports should be reviewed.
Why Reviews?
• To improve the quality.
• Catch 80% of all errors if done properly.
• Catch both coding and design errors.
• Enforce the spirit of any organization standards.
• Training and insurance.
• Useful not only for finding and eliminating defects, but
also for gaining consensus among the project team,
securing approval from stakeholders, and aiding in
professional development for team members.
• Help find defects soon after they are injected making
them cost less to fix than they would cost if they were
found in test.
Types of Review:
Inspections
• Inspections are moderated meetings in which
reviewers list all issues and defects they have
found in the document and log them so that they
can be addressed by the author.
• The goal of the inspection is to repair the defects
so that everyone on the inspection team can
approve the work product.
• Commonly inspected work products include
software requirements specifications and test
plans.
Types of Review:
Inspections
• Running an inspection meeting:
1. A work product is selected for review and a team is gathered
for an inspection meeting to review the work product.
2. A moderator is chosen to moderate the meeting.
3. Each inspector prepares for the meeting by reading the work
product and noting each defect.
4. In an inspection, a defect is any part of the work product that
will keep an inspector from approving it.
5. Discussion is focused on each defect, and coming up with a
specific resolution.
• The job of the inspection team is not just to identify the problems, but also to come up with solutions.
6. The moderator compiles all of the defect resolutions into an
inspection log
Inspection Log Example
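The original slide shows an image of an inspection log. As a stand-in, the minimal, hypothetical log below (entries, fields, and resolutions are invented) shows the kind of information a moderator typically compiles.

```python
# A hypothetical inspection log: one entry per defect, with the agreed resolution.
inspection_log = [
    {"id": 1, "location": "SRS section 3.2", "defect": "Ambiguous response-time requirement",
     "severity": "major", "resolution": "Specify 95th-percentile response time of 2 s"},
    {"id": 2, "location": "Test plan, case TP-14", "defect": "Missing invalid-input case",
     "severity": "minor", "resolution": "Add boundary and invalid-input test cases"},
]

for entry in inspection_log:
    print(f"[{entry['severity']}] #{entry['id']} {entry['location']}: "
          f"{entry['defect']} -> {entry['resolution']}")
```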
Types of Review:
Deskchecks
• A deskcheck is a simple review in which the
author of a work product distributes it to one
or more reviewers.
• The author sends a copy of the work product
to selected project team members. The team
members read it, and then write up defects
and comments to send back to the author.
Types of Review:
Deskchecks
• Unlike an inspection, a deskcheck does not
produce written logs which can be archived with
the document for later reference.
• Deskchecks can be used as predecessors to
inspections.
• In many cases, having an author of a work
product pass his work to a peer for an informal
review will significantly reduce the amount of effort
involved in the inspection.
Types of Review:
Walkthroughs
• A walkthrough is an informal way of presenting a
technical document in a meeting.
• Unlike other kinds of reviews, the author runs the
walkthrough: calling the meeting, inviting the
reviewers, soliciting comments and ensuring that
everyone present understands the work product.
• After the meeting, the author should follow up with
individual attendees who may have had additional
information or insights.
• The document should then be corrected to reflect
any issues that were raised.
Types of Review:
Code Review
• A code review is a special kind of inspection in which
the team examines a sample of code and fixes any
defects in it.
• In a code review, a defect is a block of code:
• which does not properly implement its requirements,
• which does not function as the programmer intended, or
• which is not incorrect but could be improved, for example by making it more readable or improving its performance.
(An illustrative snippet follows below.)
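The snippet below is a tiny, invented example illustrating the three kinds of code-review defects listed above; the functions and the assumed specification are hypothetical, and the comments mark what a reviewer would flag.

```python
def average(values):
    # Defect type 1 - does not implement its requirement: the (assumed) spec
    # says an empty list should return 0, but this raises ZeroDivisionError.
    total = 0
    for v in values:
        total = total + v          # Defect type 3 - correct but improvable:
                                   # sum(values) is clearer and faster.
    return total / len(values)

def is_adult(age):
    # Defect type 2 - does not work as the programmer intended:
    # the comparison excludes people who are exactly 18.
    return age > 18
```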
Types of Review:
Code Review
• It’s important to review the code which is most
likely to have defects.
• Good candidates for code review include:
• A portion of the software that only one person has the
expertise to maintain.
• Code that implements a highly abstract or tricky algorithm
• An object, library or API that is particularly difficult to work
with.
• Code written by inexperienced persons, or entirely new type
of code, or code written in an unfamiliar language
• Code which employs a new programming technique
• An area of the code that will be especially catastrophic if there
are defects
Types of Review:
Pair Programming
• Pair programming is a technique in which two
programmers work simultaneously at a single
computer and continuously review each others’
work.
• Although many programmers were introduced to
pair programming, it is a practice that can be
valuable in any development environment.
• Pair programming improves the organization by
ensuring that at least two programmers are able to
maintain any piece of the software.
Types of Review:
Pair Programming
• Pair programming works best if the pairs are
constantly rotated. Prefer to pair a junior person
with a senior for knowledge sharing.
• The project manager should not try to force pair
programming on the team; it helps to introduce the
change slowly, and where it will meet the least
resistance.
• It is difficult to implement pair programming in an
organization where programmers do not share the
same nine-to-five work schedule.
• Some people do not work well in pairs, and some
pairs do not work well together.
Relative Cost of Defect Removal
Defect Amplification Model
Defect Prevention/ Removal
• S/w contains 200K lines
• Inspection time = 7053 hrs.
• Defects prevented = 3112
• Programmer cost = 40.00 per hr.
• Total cost of defect prevention = 7053 x 40.00 = 282120.00
• Cost per defect prevented = 282120 / 3112 ≈ 91.00
Defect Removal
• Defects escaped into product = 1 per 1K lines
• Total defects escaped = 200K / 1000 = 200
• Cost of removal of a single defect = 25000
• Total defect removal cost = 25000 x 200 = 5000000
• Ratio of total removal cost to total prevention cost = 5000000 / 282120 ≈ 18
(A short calculation sketch follows below.)
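A minimal calculation of these figures, using only the numbers given on the slide (the currency unit is left unspecified, as in the original):

```python
# Defect prevention (by inspection)
lines_of_code = 200_000
inspection_hours = 7053
defects_prevented = 3112
cost_per_hour = 40.00

prevention_cost = inspection_hours * cost_per_hour                # 282120.00
cost_per_defect_prevented = prevention_cost / defects_prevented   # ~90.66, i.e. about 91

# Defect removal (defects that escape into the product)
escaped_defects = lines_of_code // 1000                           # 1 per 1K lines -> 200
cost_per_removal = 25_000
removal_cost = escaped_defects * cost_per_removal                 # 5,000,000

ratio = removal_cost / prevention_cost                            # ~17.7, i.e. about 18
print(prevention_cost, round(cost_per_defect_prevented), removal_cost, round(ratio))
```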
Formality and Timing
• Formal review presentations
• resemble conference presentations.
• Informal presentations
• less detailed, but equally correct.
• Early Reviews
• tend to be informal
• may not have enough information
• Later Reviews
• tend to be more formal
• Feedback may come too late to avoid rework
Formality and Timing
• Analysis is complete.
• Design is complete.
• After first compilation.
• After first test run.
• After all test runs.
• Any time you complete an activity that produces a complete work product.
Review Guidelines
• Keep it short (< 30 minutes).
• Don’t schedule two in a row.
• Don’t review product fragments.
• Use standards to avoid style disagreements.
• Let the coordinator run the meeting and
maintain order.
Scope Creep Effect
Scope Creep
• After the programming has started, users and
stakeholders make changes.
• Each change is easy to describe, so it sounds
“small” and the programmers agree to it.
• Eventually, the project slows to a crawl
• It’s 90% done – with 90% left to go
• The programmers know that if they had been
told from the beginning what to build, they
could have built it quickly from the start.
Software Engineering TESTING AND MAINTENANCE

  • 1. 1 Software Engineering KCS-601, Unit-IV Dr APJ Abdul Kalam Technical University, Lucknow By Dr Anuranjan Misra Dr Anuranjan Misra innovation Ambassador Ministry of Education, Government of India & Professor & Dean, GNIOT, Greater Noida
  • 2. Testing Objectives • Testing is the process of executing a program with the intent of finding errors. • A good test case is one with a high probability of finding an as-yet undiscovered error. finding an as-yet undiscovered error. • A successful test is one that discovers an as-yet- undiscovered error.
  • 3. Testing Principles • All tests should be traceable to customer requirements. • Tests should be planned before testing begins. • 80% of all errors are in 20% of the code. • Testing should begin in the small and progress to the large. • Exhaustive testing is not possible for real programs due to combinatorial explosion of possible test cases • Testing should be conducted by an independent third party if possible.
  • 4. Debugging vs. Testing • Debugging is the process of finding errors in a program under development that is not thought to be correct. • Testing is the process of attempting to find errors in a program that is thought to be correct. in a program that is thought to be correct. • Testing attempts to establish that a program satisfies its specifications. • Testing can establish the presence of errors but cannot guarantee for their absence. • Amount of testing performed must be balanced against the cost of undiscovered errors.
  • 5. Verification and Validation • The distinction between the two terms is largely to do with the role of specifications. • Validation is the process of checking whether the specification captures the customer's needs. That is, Specifications are as per customer is, Specifications are as per customer requirements/needs. • Whereas, verification is the process of checking that the software meets the specification. That is, software is as per specifications. • Testing = Validation + Verification
  • 6. SOFTWARE TESTING FUNDAMENTALS What is Testing? • Testing is a process of exercising software to verify that it satisfies specified requirements and to detect errors. • Software testing is a process of executing a program or • Software testing is a process of executing a program or application with the intent of finding the software bugs. • It can also be stated as the process of validating and verifying that a software program or application or product: 1. Meets the business and technical requirements that guided it’s design and development 2. Works as expected 3. Can be implemented with the same characteristic.
  • 7. Software Testability is simply how easily a computer program can be tested. The following characteristics lead to testable software. • Operability • Observability • Observability • Controllability • Decomposability • Simplicity • Stability • Understandability
  • 8. Why is Software Testing necessary? Software testing is very important because of the following reasons: 1. Software testing is really required to point out the defects and errors that were made during the development phases. 2. It’s essential since it makes sure of the Customer’s reliability and their satisfaction in the application. 3. It is very important to ensure the Quality of the product. Quality product delivered to the customers helps in gaining their confidence. 4. Testing is required for an effective performance of software application or product.
  • 9. What is an error in software testing? • The mistake made by programmer is known as an ‘Error’. This could happen because of the following reasons: • Because of some confusion in understanding the functionality of the software • Because of some miscalculation of the values • Because of misinterpretation of any value, etc.
  • 10. What is a Failure in software testing? • If under certain circumstances these defects get executed by the tester during the testing then it results into the failure which is known as software failure. • Not all defects result in failures, some may stay inactive in the code and we may never notice them. Example: the code and we may never notice them. Example: Defects in dead code will never result in failures. Error: Human mistake that caused fault Fault: Discrepancy in code that causes a failure. Failure: External behavior is incorrect
  • 11. Test Characteristics Kaner, Falk, and Nguyen [Kan93] suggest the following attributes of a ―good‖ test: • A good test has a high probability of finding an error • A good test is not redundant • A good test should be ―best of breed‖ • A good test should be neither too simple nor too complex
  • 12. Artifacts of testing Test case: A test case is a set of conditions or variables under which a tester will determine whether a system under test satisfies requirements or works correctly. Test plan: A Software Test Plan is a document describing the testing scope and activities. It is the basis for formally testing any software/product in a project. Test script: A Test Script is a set of instructions (written using a scripting/programming language) that is performed on a system under test to verify that the system performs as expected. Test scripts are used in automated testing
  • 13. What are the principles of testing? There are seven principles of testing. They are as follows: 1) Testing shows presence of defects: 2) Exhaustive testing is impossible 3) Early testing: 3) Early testing: 4) Pesticide paradox 5) Testing is context depending 6) Testing is context depending 7) Absence – of – errors fallacy
  • 14. Software Testing Strategy Unit testing : Code Integration testing : Design Integration testing : Design Validation test : Requirements System test : System engineering architecture
  • 15.
  • 16. • Unit Testing – Used to detect errors from each software component individually. • Integration Testing – Verification & program construction when interaction occurs within component occurs. • System Testing – Elements forming a system is • System Testing – Elements forming a system is testes. • Validation Testing – S/w validation meets functional, behavioural & performance requirements.
  • 17. INTERNAL AND EXTERNAL VIEWS OF TESTING Testing is done by two methods 1. Black box testing 2. White box testing Black-box testing alludes to tests that are conducted at the Black-box testing alludes to tests that are conducted at the software interface. A black-box test examines some fundamental aspect of a system with little regard for the internal logical structure of the software. White-box testing of software is predicated on close examination of procedural detail. Logical paths through the software and collaborations between components are tested by exercising specific sets of conditions and/or loops.
  • 18. BLACK BOX TESTING • Black box testing is also known as functional testing and Behavioral Testing. • It is a software testing technique whereby the internal workings of the item being tested are not known by the tester. In black box testing the tester only knows the • In black box testing the tester only knows the inputs and what the expected outcomes should be and not how the program arrives at those outputs. • The tester does not ever examine the programming code and does not need any further knowledge of the program other than its specification
  • 19. This method attempts to find errors in the following categories: • Incorrect or missing functions • Interface errors • Errors in data structures or external database • Errors in data structures or external database access • Behavior or performance errors • Initialization and termination errors
  • 20.
  • 21. Advantages • The test is unbiased because the designer and the tester are independent of each other. • The tester does not need knowledge of any specific programming languages. • The test is done from the point of view of the user, not the designer. • Test cases can be designed as soon as the specifications are complete. complete. Disadvantages • The test can be redundant if the software designer has already run a test case. • The test cases are difficult to design. • Testing every possible input stream is unrealistic because it would take a inordinate amount of time; therefore, many program paths will go untested.
  • 22. White - Box Testing • White-box testing, sometimes called glass-box testing, is a test-case design philosophy that uses the control structure described as part of component-level design to derive test cases. • Using white-box testing methods, can derive test cases that (1) Guarantee that all independent paths within a module have been exercised at least once, exercised at least once, (2) Exercise all logical decisions on their true and false sides, (3) Execute all loops at their boundaries and within their operational bounds (4) Exercise internal data structures to ensure their validity The structural testing is also called as white box testing In structural testing derivation of test cases is according to program structure. The basis path testing technique is one of a number of techniques for control structure testing.
  • 23. White box testing involves the testing of the software code for the following: • Internal security holes • Broken or poorly structured paths in the coding processes • The flow of specific inputs through the code • Expected output • The functionality of conditional loops • Testing of each statement, object and function on an individual basis
  • 24. Advantages • Testing can be commenced at an earlier stage. One need not wait for the GUI to be available. • Testing is more thorough, with the possibility of covering most paths. Disadvantages • Since tests can be very complex, highly skilled resources • Since tests can be very complex, highly skilled resources are required, with thorough knowledge of programming and implementation. • Test script maintenance can be a burden if the implementation changes too frequently. • Since this method of testing it closely tied with the application being testing, tools to cater to every kind of implementation/platform may not be readily available.
  • 25. Difference between Black box and White box testing
  • 26.
  • 27. White box testing 1. Basis Path Testing • Flow Graph Notation • Independent Program Paths • Deriving Test Cases Graph Matrices • Graph Matrices 2. Control Structure Testing • Condition Testing • Data Flow Testing • Loop Testing
  • 28. Flow Graph Notation • A simple notation for the representation of control flow, called a flow graph is introduced bellow.
  • 29. • Each circle, called a flow graph node, represents one or more procedural statements. • A sequence of process boxes and a decision diamond can map into a single node. • The arrows on the flow graph, called edges or links, represent flow of control and are analogous links, represent flow of control and are analogous to flowchart arrows. • An edge must terminate at a node, even if the node does not represent any procedural statements. • Areas bounded by edges and nodes are called regions.
  • 30.
  • 31. compound condition occurs when one or more Boolean operators (logical OR, AND, NAND, NOR) is present in a conditional statement. Referring to Figure the program design language (PDL) segment translates into the flow graph shown. Note that a separate node is created for each of the conditions a and b in the statement IF a OR b. Each node that contains a condition is called a predicate node and is characterized by two or more edges emanating from it
  • 32. Independent Program Paths Cyclomatic Complexity • Cyclomatic complexity is software metric that provides a quantitative measure of the logical complexity of a program. • When used in the context of the basis path testing • When used in the context of the basis path testing method, the value computed for cyclomatic complexity defines the number of independent paths in the basis set of a program and provides us with an upper bound for the number of tests that must be conducted to ensure that all statements have been executed at least once
  • 34. Independent Program Paths Cyclomatic Complexity • Cyclomatic complexity is software metric that provides a quantitative measure of the logical complexity of a program. • When used in the context of the basis path testing • When used in the context of the basis path testing method, the value computed for cyclomatic complexity defines the number of independent paths in the basis set of a program and provides us with an upper bound for the number of tests that must be conducted to ensure that all statements have been executed at least once
  • 35. • An independent path is any path through the program that introduces at least one new set of processing statements or a new condition. When stated in terms of a flow graph, an independent path must move along at least one edge that has not been traversed before the path is defined. For example, a set of independent paths for the flow example, a set of independent paths for the flow graph is • path 1: 1-11 • path 2: 1-2-3-4-5-10-1-11 • path 3: 1-2-3-6-8-9-10-1-11 • path 4: 1-2-3-6-7-9-10-1-11
  • 36. Cyclomatic complexity has a foundation in graph theory and provides us with extremely useful software metric. Complexity is computed in one of three ways: 1. The number of regions of the flow graph correspond to the cyclomatic complexity. 2. Cyclomatic complexity, V(G), for a flow graph, G, is defined as 2. Cyclomatic complexity, V(G), for a flow graph, G, is defined as V(G) = E - N + 2 where E is the number of flow graph edges, N is the number of flow graph nodes. 3. Cyclomatic complexity, V(G), for a flow graph, G, is also defined as V(G) = P + 1 where P is the number of predicate nodes contained in the flow graph G.
  • 37. 1. The flow graph has four regions. 2. V(G) = 11 edges 9 nodes + 2 = 4. 3. V(G) = 3 predicate nodes + 1 = 4.
  • 38. Deriving Test Cases The basis path testing method can be applied to a procedural design or to source code. In this section, we present basis path testing as a series of steps. The procedure average, depicted in PDL in figure below, will be used as an example to illustrate each step in the test case design method. Note that average, although an extremely simple algorithm, contains compound conditions and loops conditions and loops • Using the design or code as a foundation, draw a corresponding flow graph. • Determine the cyclomatic complexity of the resultant flow graph • Determine a basis set of linearly independent paths. • Prepare test cases that will force execution of each path in the basis set.
  • 39. Graph Matrices • To develop a software tool that assists in basis path testing, a data structure, called a graph matrix, can be quite useful. • A graph matrix is a square matrix whose size (i.e., number of rows and columns) is equal to the number of number of rows and columns) is equal to the number of nodes on the flow graph. • Each row and column corresponds to an identified node, and matrix entries correspond to connections (an edge) between nodes. A simple example of a flow graph and its corresponding graph matrix is shown in figure.
  • 40.
  • 41. CONTROL STRUCTURE TESTING • Condition Testing • Data Flow Testing • Loop Testing The basis path testing technique is one of a number of The basis path testing technique is one of a number of techniques for control structure testing. Although basis path testing is simple and highly effective, it is not sufficient in itself. Other variations on control structure testing helps in broadening testing coverage and improve quality of white- box testing.
  • 42. Condition Testing Condition testing is a test-case design method that exercises the logical conditions contained in a program module. A simple condition is a Boolean variable or a relational expression, possibly preceded with one NOT (¬) operator. A relational expression takes the form E1 <relational-operator> E2 where E1 and E2 are arithmetic expressions and <relational-operator> is one of the following: <, ≤, =, ≠ (nonequality), >, or ≥. A compound condition is composed of two or more simple conditions, Boolean operators, and parentheses.
  • 43. Data Flow Testing The data flow testing method selects test paths of a program according to the locations of definitions and uses of variables in the program. For a statement with S as its statement number, DEF(S) = {X | statement S contains a definition of X} USE(S) = {X | statement S contains a use of X}
  • 44. Loop Testing Loop testing is a white-box testing technique that focuses exclusively on the validity of loop constructs. Four different classes of loops [Bei90] can be defined: simple loops, concatenated loops, nested loops, and unstructured loops as shown in Figure.
  • 45. Classes of Loops Simple loops. The following set of tests can be applied to simple loops, where n is the maximum number of allowable passes through the loop. 1. Skip the loop entirely. 1. Skip the loop entirely. 2. Only one pass through the loop. 3. Two passes through the loop. 4. m passes through the loop where m < n. 5. n -1, n, n + 1 passes through the loop.
  • 46. Nested loops Beizer suggests an approach that will help to reduce the number of tests: 1. Start at the innermost loop. Set all other loops to minimum values. 2. Conduct simple loop tests for the innermost loop while 2. Conduct simple loop tests for the innermost loop while holding the outer loops at their minimum iteration parameter (e.g., loop counter) values. Add other tests for out-of-range or excluded values. 3. Work outward, conducting tests for the next loop, but keeping all other outer loops at minimum values and other nested loops to ―typical values. 4. Continue until all loops have been tested.
  • 47. Concatenated loops Concatenated loops can be tested using the approach defined for simple loops, if each of the loops is independent of the other. Unstructured loops Unstructured loops Class of loops should be redesigned to reflect the use of the structured programming constructs.
  • 48. Concatenated loops Concatenated loops can be tested using the approach defined for simple loops, if each of the loops is independent of the other. Unstructured loops Unstructured loops Class of loops should be redesigned to reflect the use of the structured programming constructs.
  • 49. BLACK-BOX TESTING Black-box testing, also called behavioural testing, focuses on the functional requirements of the software. That is, black-box testing techniques enable to derive sets of input conditions that will fully exercise all functional requirements for a program. Black-box testing attempts to find errors in the following categories: (1) Incorrect or missing functions, (2) Interface errors (3) Errors in data structures or external database access, (4) Behaviour or performance errors (5) Initialization and termination errors
  • 50. Types of Black box testing: • Random Testing • Equivalence class partitioning • Boundary value analysis • Graph based testing • Orthogonal array testing
  • 51. Random Testing Random testing is a black-box software testing technique where programs are tested by generating random, independent inputs. Results of the output are compared against software specifications to verify that the test output is pass or fail. • It eliminates exhaustive testing. • It eliminates exhaustive testing. • Random testing can save time and effort.
  • 52. Equivalence class partitioning • Equivalence partitioning (EP) is a specification-based or black-box technique. • It can be applied at any level of testing and is often a good technique to use first. • The idea behind this technique is to divide (i.e. to • The idea behind this technique is to divide (i.e. to partition) a set of test conditions into groups or sets that can be considered the same (i.e. the system should handle them equivalently), hence ‘equivalence partitioning’. • Equivalence partitions are also known as equivalence classes – the two terms mean exactly the same thing.
  • 54. Boundary Value Analysis A greater number of errors occurs at the boundaries of the input domain rather than in the “center.” It is for this reason that boundary value analysis (BVA) has been developed as a testing technique. Boundary value analysis leads to a selection of test cases that exercise bounding values Boundary value analysis is a test-case design technique that complements equivalence partitioning. Rather than that complements equivalence partitioning. Rather than selecting any element of an equivalence class, BVA Leads to the selection of test cases at the “edges” of the class.
  • 55.
  • 56. Graph-Based Testing Methods The first step in black-box testing is to understand the objects that are modelled in software and the relationships that connect these objects
  • 57. The symbolic representation of a graph is shown • Nodes are represented as circles connected by links that take a number of different forms. • A directed link (represented by an arrow) indicates that a relationship moves in only one direction. • A bidirectional link, also called a symmetric link, implies • A bidirectional link, also called a symmetric link, implies that the relationship applies in both directions. • Parallel links are used when a number of different relationships are established between graph nodes.
  • 58. Orthogonal Array Testing Orthogonal array testing can be applied to problems in which the input domain is relatively small but too large to accommodate exhaustive testing. The orthogonal array testing method is particularly useful in finding region faults—an error category associated with faulty logic within a software component.
  • 59. Regression testing • Regression testing is the re execution of some subset of tests that have already been conducted to ensure that changes have not propagated unintended side effects. • Regression testing is the activity that helps to ensure that changes do not introduce unintended behavior or additional errors. • Each new addition or change to baselined software may cause problems with functions that previously worked flawlessly problems with functions that previously worked flawlessly • Regression testing re-executes a small subset of tests that have already been conducted • Ensures that changes have not propagated unintended side effects • Helps to ensure that changes do not introduce unintended behavior or additional errors • May be done manually or through the use of automated capture/playback tools
  • 60. Regression testing Regression test suite contains three different classes of test cases • A representative sample of tests that will exercise all software functions • Additional tests that focus on software functions that are likely to be affected by the change • Tests that focus on the actual software components that have been changed • Therefore, the regression test suite should be designed to include only those tests that address one or more classes of errors in each of the major program functions. It is impractical and inefficient to reexecute every test for every program function once a change has occurred.
  • 61. Unit Testing • Unit testing focuses verification effort on the smallest unit of software design—the software component or module. Using the component-level design description as a guide, important control paths are tested to uncover errors within the boundary of the module. • The unit test focuses on the internal processing logic and data structures within the boundaries of a component. This type of testing can be conducted in parallel for multiple components.
  • 63. Unit-test considerations • The module interface is tested to ensure that information properly flows into and out of the program unit under test. • Local data structures are examined to ensure that data stored temporarily maintains its integrity during all steps in an algorithm's execution. • All independent paths through the control structure are exercised to ensure that all statements in a module have been executed at least once. • Boundary conditions are tested to ensure that the module operates properly at boundaries established to limit or restrict processing. And finally, all error-handling paths are tested. • Selective testing of execution paths is an essential task during the unit test. Test cases should be designed to uncover errors due to erroneous computations, incorrect comparisons, or improper control flow.
  • 64. Unit-test procedures • Unit testing is normally considered as an adjunct to the coding step. • The design of unit tests can occur before coding begins or after source code has been generated. • A review of design information provides guidance for establishing test cases that are likely to uncover errors. • A component is not a stand-alone program; driver and/or stub software must often be developed for each unit test. • The unit test environment is illustrated in Figure. In most applications a driver is nothing more than a “main program” that accepts test case data, passes such data to the component (to be tested), and prints relevant results. • A stub or “dummy subprogram” uses the subordinate module's interface, may do minimal data manipulation, prints verification of entry, and returns control to the module undergoing testing. • Unit testing is simplified when a component with high cohesion is designed. • When only one function is addressed by a component, the number of test cases is reduced and errors can be more easily predicted and uncovered.
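A minimal sketch of a driver and a stub for a single component under test (all class, method, and interface names are illustrative assumptions, not from the slides):

import java.util.Arrays;

// Minimal driver/stub sketch for unit testing a single component.
public class UnitTestDriverDemo {

    // Component under test: computes an average but delegates logging to a
    // subordinate module, which is not available yet during unit testing.
    static double average(int[] values, AuditLog log) {
        if (values == null || values.length == 0) {
            log.record("average called with no data");
            return 0.0;
        }
        double avg = Arrays.stream(values).average().orElse(0.0);
        log.record("average computed: " + avg);
        return avg;
    }

    // Interface of the subordinate module.
    interface AuditLog {
        void record(String message);
    }

    // Stub ("dummy subprogram"): uses the subordinate module's interface,
    // does minimal work, prints verification of entry, and returns control.
    static class AuditLogStub implements AuditLog {
        @Override
        public void record(String message) {
            System.out.println("[stub] record() called with: " + message);
        }
    }

    // Driver ("main program"): accepts test case data, passes it to the
    // component under test, and prints relevant results.
    public static void main(String[] args) {
        AuditLog stub = new AuditLogStub();
        System.out.println("result for {2, 4, 6} = " + average(new int[]{2, 4, 6}, stub)); // expect 4.0
        System.out.println("result for empty     = " + average(new int[]{}, stub));        // expect 0.0
    }
}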
  • 65. Integration Testing • Integration testing is a systematic technique for constructing the software architecture while at the same time conducting tests to uncover errors associated with interfacing. The objective is to take unit-tested components and build a program structure that has been dictated by design. • There is often a tendency to attempt nonincremental integration; that is, to construct the program using a “big bang” approach. All components are combined in advance. The entire program is tested as a whole. A set of errors is encountered. Correction is difficult because isolation of causes is complicated by the vast expanse of the entire program. Once these errors are corrected, new ones appear and the process continues in a seemingly endless loop. • Incremental integration is the antithesis of the big bang approach: the program is constructed and tested in small increments, where errors are easier to isolate and correct; interfaces are more likely to be tested completely; and a systematic test approach may be applied.
  • 66. Top-down integration • Top-down integration testing is an incremental approach to construction of the software architecture. • Modules are integrated by moving downward through the control hierarchy, beginning with the main control module (main program). • Modules subordinate (and ultimately subordinate) to the main control module are incorporated into the structure in either a depth-first or breadth-first manner. • Depth-first integration integrates all components on a major control path of the program structure. • Selection of a major path is arbitrary and depends on application-specific characteristics
  • 67. • Breadth-first integration incorporates all components directly subordinate at each level, moving across the structure horizontally. • The integration process is performed in a series of five steps:
  • 68. Top-down integration 1. The main control module is used as a test driver and stubs are substituted for all components directly subordinate to the main control module. 2. Depending on the integration approach selected (i.e., depth or breadth first), subordinate stubs are replaced one at a time with actual components. 3. Tests are conducted as each component is integrated. 4. On completion of each set of tests, another stub is replaced with the real component. 5. Regression testing may be conducted to ensure that new errors have not been introduced. The top-down integration strategy verifies major control or decision points early in the test process.
  • 69. Bottom-up integration: Bottom-up integration testing, as its name implies, begins construction and testing with atomic modules (i.e., components at the lowest levels in the program structure). Because components are integrated from the bottom up, the functionality provided by components subordinate to a given level is always available and the need for stubs is eliminated. A bottom-up integration strategy may be implemented with the following steps: 1. Low-level components are combined into clusters (sometimes called builds) that perform a specific software subfunction. 2. A driver (a control program for testing) is written to coordinate test case input and output. 3. The cluster is tested. 4. Drivers are removed and clusters are combined moving upward in the program structure.
  • 71. Integration test approaches There are four types of integration testing approaches. Those approaches are the following 1. Big-Bang Integration Testing It is the simplest integration testing approach, where all the modules are combined and the functionality is verified after the completion of individual module testing. In simple words, all the modules of the system are simply put together and tested. This approach is practicable only for very small systems. If an error is found during the integration testing, it is very difficult to localize the error, as the error may potentially belong to any of the modules being integrated. So, errors reported during big bang integration testing are very expensive to fix.
  • 72. Bottom-Up Integration Testing In bottom-up testing, each module at the lower levels is tested with higher modules until all modules are tested. The primary purpose of this integration testing is to test, for each subsystem, the interfaces among the various modules making up the subsystem. This integration testing uses test drivers to drive and pass appropriate data to the lower-level modules. Top-Down Integration Testing In top-down integration testing, stubs are used to simulate the behaviour of the lower-level modules that are not yet integrated. In this integration testing, testing takes place from top to bottom. First the high-level modules are tested, then the low-level modules, and finally the low-level modules are integrated with the high-level ones to ensure the system is working as intended.
  • 73. Mixed Integration Testing A mixed integration testing is also called sandwiched integration testing. A mixed integration testing follows a combination of top-down and bottom-up testing approaches. In the top-down approach, testing can start only after the top-level modules have been coded and unit tested. In the bottom-up approach, testing can start only after the bottom-level modules are ready. This sandwich or mixed approach overcomes these shortcomings of the top-down and bottom-up approaches.
  • 74. VALIDATION TESTING • Validation testing begins at the culmination of integration testing, when individual components have been exercised, the software is completely assembled as a package, and interfacing errors have been uncovered and corrected. • At the validation or system level, the distinction between conventional software, object-oriented software, and WebApps disappears. Testing focuses on user-visible actions and user-recognizable output from the system. • If a Software Requirements Specification has been developed, it describes all user-visible attributes of the software and contains a Validation Criteria section that forms the basis for a validation-testing approach
  • 75. Software validation is achieved through a series of tests that demonstrate conformity with requirements. • A test plan outlines the classes of tests to be conducted, and a test procedure defines specific test cases that are designed to ensure that all functional requirements are satisfied, all behavioral characteristics are achieved, all content is accurate and properly presented, all performance requirements are attained, documentation is correct, and usability and other requirements are met (e.g., transportability, compatibility, error recovery, maintainability). • After each validation test case has been conducted, one of two possible conditions exists: (1) The function or performance characteristic conforms to specification and is accepted or (2) a deviation from specification is uncovered and a deficiency list is created. Deviations or errors discovered at this stage in a project can rarely be corrected prior to scheduled delivery.
  • 76. Configuration Review An important element of the validation process is a configuration review. The intent of the review is to ensure that all elements of the software configuration have been properly developed, are cataloged, and have the necessary detail to bolster the support activities. The configuration review is sometimes called an audit. Alpha and Beta Testing Most software product builders use a process called alpha and beta testing to uncover errors that only the end user seems able to find. Alpha Testing The alpha test is conducted at the developer's site by a representative group of end users. The software is used in a natural setting with the developer “looking over the shoulder” of the users and recording errors and usage problems. Alpha tests are conducted in a controlled environment.
  • 77. Beta Testing • The beta test is conducted at one or more end-user sites. Unlike alpha testing, the developer generally is not present. Therefore, the beta test is a “live” application of the software in an environment that cannot be controlled by the developer. The customer records all problems (real or imagined) that are encountered during beta testing and reports these to the developer at regular intervals. As a result of problems reported during beta tests, you make modifications and then prepare for release of the software product to the entire customer base. • A variation on beta testing, called customer acceptance testing, is sometimes performed when custom software is delivered to a customer under contract. The customer performs a series of specific tests in an attempt to uncover errors before accepting the software from the developer.
  • 79. SYSTEM TESTING System testing is actually a series of different tests whose primary purpose is to fully exercise the computer-based system. Various types of system tests are • Recovery Testing • Security Testing • Stress Testing • Performance Testing • Deployment Testing
  • 80. Recovery Testing Recovery testing is a system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed. If recovery is automatic (performed by the system itself), reinitialization, checkpointing mechanisms, data recovery, and restart are evaluated for correctness. Security Testing • Security testing attempts to verify that protection mechanisms built into a system, in fact, protect it from improper penetration. • During security testing, the tester plays the role(s) of the individual who desires to penetrate the system. • The role of the system designer is to make penetration cost more than the value of the information that will be obtained.
  • 81. Stress Testing • Stress tests are designed to confront programs with abnormal situations. • Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume. • A variation of stress testing is a technique called sensitivity testing. • Sensitivity testing attempts to uncover data combinations within valid input classes that may cause instability or improper processing. Performance Testing • Performance testing is designed to test the run-time performance of software within the context of an integrated system. • Performance tests are often coupled with stress testing and usually require both hardware and software instrumentation. That is, it is often necessary to measure resource utilization.
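A very small sketch of the kind of software instrumentation a performance test relies on (the workload being timed is an arbitrary placeholder, not from the slides): elapsed time is measured around repeated executions of the operation under test.

// Very small performance-instrumentation sketch: time an operation over
// many iterations and report the average cost per call.
public class PerformanceProbeDemo {

    static long workload(int n) {             // placeholder operation under test
        long sum = 0;
        for (int i = 0; i < n; i++) sum += i;
        return sum;
    }

    public static void main(String[] args) {
        final int iterations = 1_000;

        long start = System.nanoTime();
        long checksum = 0;
        for (int i = 0; i < iterations; i++) {
            checksum += workload(100_000);
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        System.out.println("iterations: " + iterations
                + ", elapsed: " + elapsedMs + " ms"
                + ", avg per call: " + (elapsedMs / (double) iterations) + " ms"
                + " (checksum " + checksum + ")");
    }
}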
  • 82. Deployment Testing Deployment testing, sometimes called configuration testing, exercises the software in each environment in which it is to operate. In addition, deployment testing examines all installation procedures and specialized installation software (e.g., “installers”) that will be used by customers, and all documentation that will be used to introduce the software to end users. DEBUGGING Debugging occurs as a consequence of successful testing. That is, when a test case uncovers an error, debugging is the process that results in the removal of the error.
  • 84. The Debugging process The debugging process begins with the execution of a test case. Results are assessed and a lack of correspondence between expected and actual performance is encountered. The debugging process will usually have one of two outcomes: (1) The cause will be found and corrected or (2) The cause will not be found. In the latter case, the person performing debugging may suspect a cause, design a test case to help validate that suspicion, and work toward error correction in an iterative fashion.
  • 85. Why is debugging so difficult? 1. The symptom and the cause may be geographically remote. 2. The symptom may disappear (temporarily) when another error is corrected. 3. The symptom may actually be caused by nonerrors (e.g., round-off inaccuracies). 4. The symptom may be caused by human error that is not easily traced. 5. The symptom may be a result of timing problems, rather than processing problems. 6. It may be difficult to accurately reproduce input conditions (e.g., a real-time application in which input ordering is indeterminate). 7. The symptom may be intermittent. This is particularly common in embedded systems that couple hardware and software inextricably. 8. The symptom may be due to causes that are distributed across a number of tasks running on different processors.
  • 86. Debugging Strategies In general, three debugging strategies have been proposed (1) brute force, (2) backtracking, and (3) cause elimination. • Each of these strategies can be conducted manually, but modern debugging tools can make the process much more effective. Debugging tactics. • The brute force category of debugging is probably the most common and least efficient method for isolating the cause of a software error.
  • 87. Backtracking is a fairly common debugging approach that can be used successfully in small programs. Beginning at the site where a symptom has been uncovered, the source code is traced backward (manually) until the cause is found. Unfortunately, as the number of source lines increases, the number of potential backward paths may become unmanageably large. The third approach to debugging—cause elimination—is manifested by induction or deduction and introduces the concept of binary partitioning. Automated debugging: • Each of these debugging approaches can be supplemented with debugging tools. • A wide variety of debugging compilers, dynamic debugging aids (“tracers”), automatic test-case generators, and cross-reference mapping tools are available.
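As a rough, hypothetical illustration of the binary-partitioning idea behind cause elimination (the failing routine and data below are assumptions, not from the slides), the input that provokes a symptom can be repeatedly halved, keeping whichever half still reproduces the failure, until the smallest failing slice is isolated:

import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Rough sketch of binary partitioning for cause elimination: repeatedly
// halve the failing input and keep the half that still reproduces the symptom.
public class BinaryPartitionDebugDemo {

    // Hypothetical defect: the routine fails whenever the input contains a negative value.
    static boolean reproducesFailure(List<Integer> input) {
        return input.stream().anyMatch(v -> v < 0);
    }

    static List<Integer> isolate(List<Integer> failing, Predicate<List<Integer>> fails) {
        List<Integer> current = failing;
        while (current.size() > 1) {
            List<Integer> left = current.subList(0, current.size() / 2);
            List<Integer> right = current.subList(current.size() / 2, current.size());
            if (fails.test(left)) {
                current = left;          // symptom survives in the left half
            } else if (fails.test(right)) {
                current = right;         // symptom survives in the right half
            } else {
                break;                   // symptom needs both halves; stop partitioning
            }
        }
        return current;
    }

    public static void main(String[] args) {
        List<Integer> failingCase = new ArrayList<>(List.of(4, 9, 2, -7, 5, 8, 1, 3));
        System.out.println("smallest failing slice: "
                + isolate(failingCase, BinaryPartitionDebugDemo::reproducesFailure));   // prints [-7]
    }
}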
  • 88. Correcting the Error Once a bug has been found, it must be corrected. Van Vleck suggests three simple questions that you should ask before making the “correction” that removes the cause of a bug: • Is the cause of the bug reproduced in another part of the program? • What “next bug” might be introduced by the fix I'm about to make? • What could we have done to prevent this bug in the first place?
  • 90. Software Implementation Techniques After detailed system design we get a system design which can be transformed into an implementation model. The goal of coding is to implement the design in the best possible manner. Coding affects both testing and maintenance very deeply. Coding should be done in such a manner that, rather than simply making the programmer's job easier, it also simplifies the tasks of the testing and maintenance phases. Various objectives of coding are – 1. Programs developed in coding should be readable. 2. They should execute efficiently. 3. The program should utilize less amount of memory. 4. The programs should not be lengthy. If the objectives are clearly specified before the programmers then while coding they try to achieve the specified objectives. To achieve these objectives some programming principles must be followed.
  • 91. Coding Practices There are some commonly used programming practices that help in avoiding common errors. These are enlisted below – 1. Control construct Single-entry and single-exit constructs need to be used. The standard control constructs must be used instead of a wide variety of ad hoc controls. 2. Use of gotos The goto statement makes the program unstructured and it also imposes overhead on the compilation process. Hence avoid the use of goto statements as far as possible and use a structured alternative instead. 3. Information hiding Information hiding should be supported as far as possible. Only access functions to the data structures should be made visible, and the information present in them should be hidden. (Practices 3, 10, 11, and 13 are pulled together in a short code sketch after practice 17.)
  • 92. 4. Nesting Nesting means defining one structure inside another. If the nesting is too deep then it becomes hard to understand the code. Hence, as far as possible, avoid deep nesting of the code. 5. User-defined data types Modern programming languages allow the user to define data types such as enumerated types. Use of user-defined data types enhances the readability of the code. 6. Module size There is no standard rule about the size of a module, but a very large module is unlikely to be functionally cohesive. 7. Module interface A complex module interface must be carefully examined. A simple rule of thumb is that a module interface with more than five parameters should be broken into multiple modules with simpler interfaces.
  • 93. 8. Side effects Avoid obscure side effects. If some part of the code is changed arbitrarily then it will cause some side effect. For example, if the number of parameters passed to a function is changed then it will be difficult to understand the purpose of that function. 9. Robustness The program is said to be robust if it does something sensible even when some unexpected condition occurs. In such situations the program does not crash but exits gracefully. 10. Switch case with defaults The choice being passed to the switch-case statement may have some unpredictable value, and the default case will help the switch-case statement execute without any problem. Hence it is a good practice to always have a default case in the switch statement. 11. Empty catch block Catching an exception and taking no action is not a good practice. Therefore take some default action, even if it is just writing some print statement, whenever the exception is caught.
  • 94. 12. Empty if and while statements In if and while statements some conditions are checked. If an empty block is written for such a condition check, the check is useless; such useless checks should be avoided. 13. Check for read return Many times the return values obtained from read functions are not checked because we blindly believe that the desired result is present in the corresponding variable when the read function completes. But this may cause some serious errors. Hence the return value must be checked for the read operation. 14. Return from Finally Block The return value should come from the finally block whenever possible. This helps in distinguishing the data values that are returned from the try-catch statements.
  • 95. 15. Trusted Data sources A counter-check should be made before accessing input data. For example, while reading data from a file one must check whether the data accessed is NULL or not. 16. Correlated Parameters Often there is a correlation between data items. It is a good practice to check these correlations before performing any operation on those data items. 17. Exceptions Handling Due to some input condition, the program may not follow the main path and may instead follow an exceptional path. In order to make the software more reliable, it is necessary to write code to handle such exceptional paths.
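A compact sketch pulling together several of the practices above: information hiding (practice 3), a default case in switch statements (practice 10), a non-empty catch block (practice 11), and checking the value returned by a read call (practice 13). All class, method, and data names are illustrative assumptions, not from the slides.

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Compact sketch of several coding practices from the list above.
public class CodingPracticesDemo {

    // Practice 3 (information hiding): the counter is private; callers see
    // only access functions.
    private int errorCount;

    public int getErrorCount() { return errorCount; }
    public void recordError()  { errorCount++; }

    // Practice 10 (switch case with defaults): unpredictable values fall
    // through to a default case instead of silently doing nothing.
    static String describe(int statusCode) {
        switch (statusCode) {
            case 0:  return "OK";
            case 1:  return "WARNING";
            case 2:  return "ERROR";
            default: return "UNKNOWN STATUS " + statusCode;
        }
    }

    public static void main(String[] args) throws IOException {
        CodingPracticesDemo demo = new CodingPracticesDemo();

        System.out.println(describe(2));    // ERROR
        System.out.println(describe(42));   // UNKNOWN STATUS 42

        // Practice 11 (no empty catch blocks): at minimum, record what happened.
        try {
            Integer.parseInt("not-a-number");
        } catch (NumberFormatException e) {
            demo.recordError();
            System.out.println("could not parse input, using a default (" + e.getMessage() + ")");
        }

        // Practice 13 (check the read return value): read() may return fewer
        // bytes than requested, or -1 at end of stream.
        byte[] buffer = new byte[16];
        try (InputStream in = new ByteArrayInputStream("hello".getBytes())) {
            int bytesRead = in.read(buffer);
            if (bytesRead < 0) {
                System.out.println("end of stream, nothing was read");
            } else {
                System.out.println("read " + bytesRead + " bytes: " + new String(buffer, 0, bytesRead));
            }
        }

        System.out.println("errors recorded: " + demo.getErrorCount());   // 1
    }
}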
  • 96. Coding Standards Any good software development approach suggests adhering to some well-defined standards or rules for coding. These rules are called coding standards. 1. Naming Conventions Following are some commonly used naming conventions in coding (a short code sketch after the Statements guidelines illustrates several of them): 1. Package names and variable names should be in lower case. 2. Variable names must not begin with numbers. 3. The type name should be a noun and it should start with a capital letter. 4. Constants must be in upper case (for example PI, SIZE). 5. Method names must be given in lower case. 6. Variables with a large scope must have long names, for example count_total, sum; variables with a short scope must have short names, for example i, j. 7. The prefix "is" must be used for Boolean variables, for example isEmpty or isFull.
  • 97. 2. Files The reader must get an idea about the purpose of the file from its name. In some programming languages, like Java: The file extension must be .java. The name of the file and the class defined in the file must be the same. Line length in the file must be limited to 80 characters. 3. Commenting/Layout Comments are the non-executable part of the code, but they are very important because they enhance the readability of the code. The purpose of comments is to explain the logic of the program. Single-line comments must be given by //. Comments must be given for the names of the variables. A block of comments must be enclosed within /* and */.
  • 98. 4. Statements • There are some guidelines about the declaration and executable statements. • Declare related variables on the same line and unrelated variables on separate lines. • Class variables should never be declared public. • Make use of only loop control within the for loop. • Avoid the use of break and continue statements in loops. • Avoid complex conditional expressions; make use of temporary variables instead. • Avoid the use of the do...while statement.
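A short sketch illustrating the naming conventions and statement guidelines above (the class and its members are illustrative assumptions; following the file convention, this would live in a file named StockItem.java):

// Illustrative sketch of the naming and statement conventions above.
public class StockItem {                              // type name: a noun, capitalized

    public static final int MAX_SIZE = 100;           // constant in upper case

    private int countTotal;                            // descriptive name for a wider-scope variable
    private boolean isEmpty = true;                    // Boolean variable with the "is" prefix

    public void addUnits(int units) {                  // method name in lower case
        for (int i = 0; i < units && countTotal < MAX_SIZE; i++) {   // only loop control inside the for
            countTotal++;
        }
        isEmpty = (countTotal == 0);
    }

    public int getCountTotal() {                       // class variables are not public; use accessors
        return countTotal;
    }

    public boolean isEmpty() {
        return isEmpty;
    }

    public static void main(String[] args) {
        StockItem item = new StockItem();
        item.addUnits(3);
        System.out.println("total units: " + item.getCountTotal() + ", empty: " + item.isEmpty());
    }
}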
  • 99. Code refactoring • Code refactoring is the process of restructuring existing computer code – changing the factoring – without changing its external behavior. Refactoring improves non-functional attributes of the software. Advantages include improved code readability and reduced complexity to improve source code maintainability, and a more expressive internal architecture or object model to improve extensibility. • By continuously improving the design of code, we make it easier and easier to work with. This is in sharp contrast to what typically happens: little refactoring and a great deal of attention paid to expediently adding new features. If you get into the hygienic habit of refactoring continuously, you'll find that it is easier to extend and maintain code.
  • 100. There are two general categories of benefits to the activity of refactoring. 1. Maintainability. It is easier to fix bugs because the source code is easy to read and the intent of its author is easy to grasp. This might be achieved by reducing large monolithic routines into a set of individually concise, well-named, single-purpose methods. It might be achieved by moving a method to a more appropriate class, or by removing misleading comments. 2. Extensibility. It is easier to extend the capabilities of the application if it uses recognizable design patterns, and it provides some flexibility where none before may have existed.
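A small before/after sketch of the first kind of benefit, an extract-method style refactoring (the invoice routine and its tax rate are hypothetical, not from the slides): the monolithic routine is split into concise, single-purpose methods without changing its external behavior.

// Small extract-method sketch: same observable behavior before and after.
public class InvoiceRefactoringDemo {

    // Before: one monolithic routine doing validation, calculation, and formatting.
    static String printInvoiceBefore(String customer, double[] amounts) {
        if (customer == null || customer.isEmpty()) throw new IllegalArgumentException("no customer");
        double total = 0;
        for (double a : amounts) total += a;
        double tax = total * 0.20;
        return "Invoice for " + customer + ": net=" + total + ", tax=" + tax + ", gross=" + (total + tax);
    }

    // After: the same behavior expressed through single-purpose methods.
    static String printInvoiceAfter(String customer, double[] amounts) {
        validateCustomer(customer);
        double net = netTotal(amounts);
        return format(customer, net, taxOn(net));
    }

    static void validateCustomer(String customer) {
        if (customer == null || customer.isEmpty()) throw new IllegalArgumentException("no customer");
    }

    static double netTotal(double[] amounts) {
        double total = 0;
        for (double a : amounts) total += a;
        return total;
    }

    static double taxOn(double net) {
        return net * 0.20;
    }

    static String format(String customer, double net, double tax) {
        return "Invoice for " + customer + ": net=" + net + ", tax=" + tax + ", gross=" + (net + tax);
    }

    public static void main(String[] args) {
        double[] amounts = {100.0, 50.0};
        System.out.println(printInvoiceBefore("Acme", amounts));
        System.out.println(printInvoiceAfter("Acme", amounts));   // identical output
    }
}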
  • 101. Refactoring is needed due to various reasons. They are as follows: • Lack of Modularity – An existing feature of one application can't be used in another application due to its tight coupling with the application components. • Lack of Reusable Components – There are instances of code duplication, and potentially reusable components depend on application code. • Lack of Pluggable Components – Existing components are not easily replaceable because the application code is tightly coupled with the components. • Service Oriented Architecture – There is scope for SOA components where each component can work as a reusable service. • Code Redundancy – The application has lots of dead code and duplicate code. • Lack of Layered Architecture – Any change in one layer causes changes in all other layers.
  • 102. • Poor Coding Style – Coding standards have not been followed properly, which includes improper names for objects/methods and accessing fields without getters/setters. • Illogical Methods Composition – Illogical grouping of methods in one class. • Improper Packaging – Artifacts are placed in the application code which could be kept at other locations, forcing developers to change the jars in each application manually instead of updating them at a central location. • Use of Old Versions of Third Party Applications/Jars – The application is using older versions of software instead of the latest version, and hence new features can't be used and explored in the application.
  • 103. Steps for Refactoring: 1. Writing unit test cases – Test cases should be written to test the application behavior and ensure that it is unchanged after every cycle of refactoring. 2. Identifying the task for refactoring – Identify the set of tasks for performing the refactoring. 3. Find the problem – Find out if there is any issue with the current piece of code and, if yes, specify what the problem is. 4. Evaluate/Analyze the problem – After finding out the potential problem, evaluate it against the risks involved and the benefits. 5. Design solution – Find out what the resultant code will be after refactoring. Design a solution which leads from the current state to the target state. 6. Modify the code – Refactor the code without changing the outer behavior of the code. 7. Test the refactored code – If it fails, roll back to the last smaller change and repeat the refactoring in a different way. 8. Repeat the above cycle until the current code moves to the target state.
  • 104. Risks Involved in Refactoring • If there is a requirement for code change then refactoring occurs. The main risk of refactoring is that existing working code may break due to the changes being made. This is the main reason why refactoring is most often not done. In order to mitigate or avoid this risk the following two rules must be followed: • Rule 1: Re-factor in small steps. • Rule 2: For testing the existing functionalities make use of test scripts. • By following these two rules the bugs introduced by refactoring can be easily identified and corrected. In each refactoring only a small change is made, but a series of refactorings makes a significant transformation in the program structure.
  • 105. Key Benefits from Refactoring • Improves software extensibility • Reduces maintenance cost • Provides standardized code • Architecture improvement without impacting software behavior • Provides more readable and modular code • Refactored modular components increase potential reusability.
  • 106. Software maintenance • “The modification of a software product after delivery to correct faults, to improve performance or other attributes, or to adapt the product to a modified environment.” • Modifying a program after it has been put into use, after delivery to the customer. • Maintenance does not normally involve major changes to the system's architecture • Maintenance is often contracted out/outsourced There are a number of reasons why modifications are required; some of them are briefly mentioned below: • Market Conditions • Client Requirements • Host Modifications • Organization Changes
  • 107. Types of software maintenance Corrective maintenance • Impractical to exhaustively test a large software system. Therefore reasonable to assume that any large system will have errors. • Testing should be thorough for common cases, so errors likely to be obscure. • The process of receiving reports of such errors, diagnosing the problem, and fixing it is called "corrective maintenance"
  • 108. Adaptive maintenance • Systems don't function in isolation. • Typically they may interact with operating systems, DBMS's, GUI's, network protocols, other external software packages, and various hardware platforms • In the IT industry any or all of these may change over a very short period (typically six months) • The process of assessing the effects of such "environmental changes" on a software system, and then modifying the system to cope with those changes is known as "adaptive maintenance"
  • 109. Perfective maintenance • Users and marketers are never satisfied • Even if a system is wildly successful, someone will want new or enhanced features added to it • Sometimes there will also be impetus to alter the way a certain component of the system works, or its interface • The process of receiving suggestions and requests for such enhancements or modifications, evaluating them, and implementing the approved ones is known as "perfective maintenance"
  • 110. Preventative maintenance • Sometimes changes are needed for entirely internal reasons • Such changes have no direct discernible effect on the user, but lay the groundwork for easier maintenance in the future • Alternatively, such changes may improve reliability, or provide a better basis for future development • The process of planning such code reorganizations, implementing them, and testing to ensure that they have no adverse impact is known as "preventative maintenance"
  • 111. Cost of Maintenance Reports suggest that the cost of maintenance is high. A study on estimating software maintenance found that the cost of maintenance is as high as 67% of the cost of the entire software process cycle. On average, the cost of software maintenance is more than 50% of all SDLC phases.
  • 112. Typical problems with maintenance • Inadequate documentation of software evolution • Inadequate documentation of software design and structure • Loss of "cultural" knowledge of software due to staff turnover • Lack of allowance for change in original software design • Maintenance is unglamorous and may be viewed as a "punishment task" Software-end factors affecting Maintenance Cost • Structure of Software Program • Programming Language • Dependence on external environment • Staff reliability and availability
  • 113. Maintenance Activities IEEE provides a framework for sequential maintenance process activities. It can be used in iterative manner and can be extended so that customized items and processes can be included.
  • 114. • Identification & Tracing - It involves activities pertaining to identification of the requirement for modification or maintenance. It is generated by the user, or the system may itself report it via logs or error messages. The maintenance type is also classified here. • Analysis - The modification is analyzed for its impact on the system, including safety and security implications. If the probable impact is severe, an alternative solution is looked for. A set of required modifications is then materialized into requirement specifications. The cost of modification/maintenance is analyzed and estimation is concluded. • Design - New modules, which need to be replaced or modified, are designed against the requirement specifications set in the previous stage. Test cases are created for validation and verification. • Implementation - The new modules are coded with the help of the structured design created in the design step. Every programmer is expected to do unit testing in parallel.
  • 115. • System Testing - Integration testing is done among the newly created modules. Integration testing is also carried out between the new modules and the system. Finally the system is tested as a whole, following regression testing procedures. • Acceptance Testing - After testing the system internally, it is tested for acceptance with the help of users. If at this stage the user reports some issues, they are addressed or noted to be addressed in the next iteration. • Delivery - After the acceptance test, the system is deployed all over the organization either by a small update package or a fresh installation of the system. The final testing takes place at the client end after the software is delivered. A training facility is provided if required, in addition to the hard copy of the user manual. • Maintenance management - Configuration management is an essential part of system maintenance. It is aided with version control tools to control versions, semi-versions or patch management.
  • 116. Reengineering: • The process typically encompasses a combination of other processes such as reverse engineering, re-documentation, restructuring, translation, and forward engineering. • For example, initially Unix was developed in assembly language. When language C came into existence, Unix was re-engineered in C, because working in assembly language was difficult. • Other than this, sometimes programmers notice that few parts of software need more maintenance than others and they also need re-engineering.
  • 117. Business Process Reengineering (BPR) Also known as process innovation and core process redesign - attempts to restructure or obliterate unproductive management layers, wipe out redundancies, and remodel processes differently. Business process re-engineering refers to the analysis, control and development of a company’s systems and workflow
  • 118. Define Objectives and Framework: • First of all, the objective of re-engineering must be defined in quantitative and qualitative terms. The objectives are the end results that the management desires after the reengineering. • Once the objectives are defined, the need for change should be well communicated to the employees because the success of BPR depends on the readiness of the employees to accept the change. Identify Customer Needs: • While redesigning the business process, the needs of the customers must be taken into prior consideration. The process shall be redesigned in such a way that it clearly provides added value to the customer. One must take the following parameters into consideration: • Customer's expected utilities in product and services • Customer requirements, buying habits and consuming tendencies. • Customer problems and expectations about the product or service.
  • 119. Study the Existing Process: Before deciding on the changes to be made in the existing business process, one must analyze it carefully. The existing process provides a base for the new process, and hence the "what" and "why" of the new process can be well designed by studying the rights and wrongs of the existing business plan. Formulate a Redesign Business Plan: Once the existing business process is studied thoroughly, the required changes are written down and converted into an ideal re-design process. Here, all the changes are chalked down, and the best among all the alternatives is selected. Implement the Redesign: Finally, the changes are implemented into the redesign plan to achieve the dramatic improvements. It is the responsibility of both the management and the designer to operationalise the new process and gain the support of all. Thus, business process reengineering is a collection of interrelated tasks or activities designed to accomplish the specified outcome.
  • 120. Re-Engineering Process • Decide what to re-engineer. Is it whole software or a part of it? • Perform Reverse Engineering, in order to obtain specifications of existing software. • Restructure Program if required. For example, changing function-oriented programs into object-oriented programs. • Re-structure data as required. • Apply Forward engineering concepts in order to get re-engineered software.
  • 122. Code restructuring • Source code is analyzed and violations of structured programming practices are noted and repaired • Revised code needs to be reviewed and tested Data restructuring • Usually requires full reverse engineering • Current data architecture is dissected • Data models are defined • Existing data structures are reviewed for quality
  • 123. Reverse Engineering • It is a process to achieve system specification by thoroughly analyzing and understanding the existing system. This process can be seen as a reverse SDLC model, i.e. we try to get a higher abstraction level by analyzing lower abstraction levels. • An existing system is a previously implemented design, about which we know nothing. Designers then do reverse engineering by looking at the code and trying to get the design. • With the design in hand, they try to conclude the specifications. Thus, going in reverse from code to system specification.
  • 125. Forward Engineering • Forward engineering is a process of obtaining desired software from the specifications in hand which were brought down by means of reverse engineering. It assumes that there was some software engineering already done in the past. • Forward engineering is the same as the software engineering process, with only one difference – it is always carried out after reverse engineering. • In forward engineering, one takes a set of primitives of interest, builds them into a working system, and then observes what the system can and cannot do.
  • 127. Difference between Forward Engineering and Reverse Engineering • Goal: The goal of forward engineering is to develop new software from scratch, while the goal of reverse engineering is to analyze and understand an existing software system. • Process: Forward engineering involves designing and implementing a new software system based on requirements and specifications. Reverse engineering involves analyzing an existing software system to understand its design, structure, and behavior. • Tools and Techniques: Forward engineering often involves the use of software development tools, such as IDEs, code generators, and testing frameworks. Reverse engineering often involves the use of reverse engineering tools, such as decompilers, disassemblers, and code analyzers.
  • 128. Difference between Forward Engineering and Reverse Engineering • Focus: Forward engineering focuses on the creation of new code and functionality, while reverse engineering focuses on understanding and documenting existing code and functionality. • Output: The output of forward engineering is a new software system, while the output of reverse engineering is documentation of an existing software system, such as a UML diagram, flowchart, or software specification.
  • 129. Software Reviews • A review is any activity in which a work product is exposed to reviewers to examine and give feedback. • Purpose is to find defects (errors) before they are passed on to another software engineering activity or released to the customer. • Software engineers (and others) conduct formal technical reviews (FTR) • Using formal technical reviews is an effective means for improving software quality.
  • 130. Review Roles • Presenter (designer/producer). • Coordinator • Recorder • records events of meeting • builds paper trail • Reviewers • maintenance experts • standards bearer • user representative • others
  • 131. Formal Technical Reviews • Involves 3 to 5 people (including reviewers) • Advance preparation (no more than 2 hours per person) required • Duration of review meeting should be less than 2 hours • Focus of review is on a discrete work product • Review leader organizes the review meeting at the producer's request.
  • 132. Formal Technical Reviews • Reviewers ask questions that enable the producer to discover his or her own error (the product is under review not the producer) • Producer of the work product walks the reviewers through the product • Recorder writes down any significant issues raised during the review • Reviewers decide to accept or reject the work product or to go for additional reviews of the product. • All work products and components like SRS, schedules, design documents, code, test plans, test cases, and defect reports should be reviewed.
  • 133. Why Reviews? • To improve the quality. • Catch 80% of all errors if done properly. • Catch both coding and design errors. • Enforce the spirit of any organization standards. • Training and insurance. • Useful not only for finding and eliminating defects, but also for gaining consensus among the project team, securing approval from stakeholders, and aiding in professional development for team members. • Help find defects soon after they are injected, making them cost less to fix than they would cost if they were found in test.
  • 134. Types of Review: Inspections • Inspections are moderated meetings in which reviewers list all issues and defects they have found in the document and log them so that they can be addressed by the author. • The goal of the inspection is to repair the defects so that everyone on the inspection team can approve the work product. • Commonly inspected work products include software requirements specifications and test plans.
  • 136. Types of Review: Inspections • Running an inspection meeting: 1. A work product is selected for review and a team is gathered for an inspection meeting to review the work product. 2. A moderator is chosen to moderate the meeting. 3. Each inspector prepares for the meeting by reading the work product and noting each defect. 4. In an inspection, a defect is any part of the work product that will keep an inspector from approving it. 5. Discussion is focused on each defect, and coming up with a specific resolution. • The job of the inspection team is not just to identify the problems but also to come up with the solutions. 6. The moderator compiles all of the defect resolutions into an inspection log.
  • 138. Types of Review: Deskchecks • A deskcheck is a simple review in which the author of a work product distributes it to one or more reviewers. • The author sends a copy of the work product to selected project team members. The team members read it, and then write up defects and comments to send back to the author.
  • 139. Types of Review: Deskchecks • Unlike an inspection, a deskcheck does not produce written logs which can be archived with the document for later reference. • Deskchecks can be used as predecessors to inspections. • In many cases, having an author of a work product pass his work to a peer for an informal review will significantly reduce the amount of effort involved in the inspection.
  • 140. Types of Review: Walkthroughs • A walkthrough is an informal way of presenting a technical document in a meeting. • Unlike other kinds of reviews, the author runs the walkthrough: calling the meeting, inviting the reviewers, soliciting comments, and ensuring that everyone present understands the work product. • After the meeting, the author should follow up with individual attendees who may have had additional information or insights. • The document should then be corrected to reflect any issues that were raised.
  • 141. Types of Review: Code Review • A code review is a special kind of inspection in which the team examines a sample of code and fixes any defects in it. • In a code review, a defect is a block of code • which does not properly implement its requirements. • which does not function as the programmer intended. • which is not incorrect but could be improved. For example, it could be made more readable or its performance could be improved
  • 142. Types of Review: Code Review • It's important to review the code which is most likely to have defects. • Good candidates for code review include: • A portion of the software that only one person has the expertise to maintain. • Code that implements a highly abstract or tricky algorithm • An object, library or API that is particularly difficult to work with. • Code written by inexperienced persons, or entirely new type of code, or code written in an unfamiliar language • Code which employs a new programming technique • An area of the code that will be especially catastrophic if there are defects
  • 144. Types of Review: Pair Programming • Pair programming is a technique in which two programmers work simultaneously at a single computer and continuously review each other's work. • Although many programmers were introduced to pair programming, it is a practice that can be valuable in any development environment. • Pair programming improves the organization by ensuring that at least two programmers are able to maintain any piece of the software.
  • 146. Types of Review: Pair Programming • Pair programming works best if the pairs are constantly rotated. Prefer to pair a junior person with a senior for knowledge sharing. • The project manager should not try to force pair programming on the team; it helps to introduce the change slowly, and where it will meet the least resistance. • It is difficult to implement pair programming in an organization where programmers do not share the same nine-to-five work schedule. • Some people do not work well in pairs, and some pairs do not work well together.
  • 149. Relative Cost of Defect Removal
  • 154. Defect Prevention/ Removal • S/w contains 200K lines • Inspection time = 7053 Hrs. • Defects prevented = 3112 • Programmer cost = 40.00 per hr. • Total cost of defect prevention = 7053 x 40.00 = 282120.00 • Cost per defect prevention = 282120/3112 ≈ 91.00
  • 155. Defect Removal • Defect escaped into product = 1 per 1K lines • Total defects escaped = 200K/1000 = 200 • Cost of removal of single defect = 25000 • Total defect removal cost = 25000 x 200 = 5000000 • Ratio of defect removal cost to prevention cost = 5000000/282120 ≈ 18
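The arithmetic on the two slides above can be reproduced directly; a simple worked check of the given figures:

// Worked check of the defect prevention vs. removal figures above.
public class DefectCostDemo {
    public static void main(String[] args) {
        double inspectionHours   = 7053;
        double hourlyRate        = 40.00;
        int    defectsPrevented  = 3112;

        double preventionCost    = inspectionHours * hourlyRate;          // 282,120.00
        double costPerPrevention = preventionCost / defectsPrevented;     // about 91 per defect

        int    escapedDefects    = 200_000 / 1000;                        // 1 escaped defect per 1K lines
        double removalCostEach   = 25_000;
        double removalCost       = removalCostEach * escapedDefects;      // 5,000,000

        System.out.printf("prevention cost: %.2f, per defect: %.2f%n", preventionCost, costPerPrevention);
        System.out.printf("removal cost: %.2f, ratio removal/prevention: %.0f%n",
                removalCost, removalCost / preventionCost);               // about 18
    }
}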
  • 156. Formality and Timing • Formal review presentations • resemble conference presentations. • Informal presentations • less detailed, but equally correct. • Early Reviews • tend to be informal • may not have enough information • Later Reviews • tend to be more formal • Feedback may come too late to avoid rework
  • 157. Formality and Timing • Analysis is complete. • Design is complete. • After first compilation. • After first test run. • After all test runs. • Any time you complete an activity that produces a complete work product.
  • 158. Review Guidelines • Keep it short (< 30 minutes). • Don't schedule two in a row. • Don't review product fragments. • Use standards to avoid style disagreements. • Let the coordinator run the meeting and maintain order.
  • 162. Scope Creep • After the programming has started, users and stakeholders make changes. • Each change is easy to describe, so it sounds “small” and the programmers agree to it. • Eventually, the project slows to a crawl • It's 90% done – with 90% left to go • The programmers know that if they had been told from the beginning what to build, they could have built it quickly from the start.