CHAPTER 7
Introduction To Software Testing
• Software testing is the process of verifying the correctness of software by considering all of its attributes (reliability, scalability, portability, reusability, usability) and evaluating the execution of software components to find bugs, errors, or defects.
Characteristics of Software Testing
• To perform effective testing, a software team should conduct effective formal technical reviews.
• Testing begins at the component level.
• Different testing techniques are appropriate at different points in time.
• Testing is conducted by the developer of the software and, for large projects, by an independent test group.
• Testing and debugging are different activities, but debugging must be accommodated in any testing strategy.
6.1.1 VERIFICATION AND VALIDATION
• Software testing is one element of verification and validation (V&V). Verification refers to the set of activities that ensure that software correctly implements a specific function.
• Validation refers to a different set of activities that ensure that the software that has been built is traceable to customer requirements.
• The definition of V&V encompasses many of the activities that are encompassed by software quality assurance (SQA).
• SQA activities include formal technical reviews, quality and configuration audits, performance monitoring, simulation, feasibility study, documentation review, database review, algorithm analysis, development testing, usability testing, qualification testing, and installation testing.
6.1.2 ORGANIZING FOR SOFTWARE
TESTING
❖ For every software project, there is an inherent conflict of interest that occurs as testing begins: the people who have built the software are now asked to test it.
❖ This seems harmless in itself; after all, who knows the program better than its developers?
❖ Unfortunately, these same developers have a vested interest in demonstrating that the program is error free, that it works according to customer requirements, and that it will be completed on schedule and within budget. Each of these interests works against thorough testing.
❖ From a psychological point of view, software analysis and design (along with coding) are constructive tasks. The software engineer analyzes, models, and then creates a computer program and its documentation.
❖ From the point of view of the builder, testing can be considered to be (psychologically) destructive.
❖ So the builder treads lightly, designing and executing tests that will demonstrate that the program works, rather than uncovering errors. Unfortunately, errors will be present.
❖ There are a number of misconceptions that can be erroneously inferred from the preceding discussion:
1. that the developer of software should do no testing at all,
❖ TEST STRATEGIES FOR
CONVENTIONAL SOFTWARE.
❖ There are many strategies that can be used to test software. At one extreme, a software team could wait until the system is fully constructed and then conduct tests on the overall system in hopes of finding errors.
❖ At the other extreme, a software engineer could conduct tests on a daily
basis, whenever any part of the system is constructed.
❖ A testing strategy that is chosen by most software teams falls
between the two extremes.
❖ It takes an incremental view of testing, beginning
with the testing of individual program units, moving
to tests designed to facilitate the integration of the
units, and culminating with tests that exercise the constructed
system.
❖ Each of these classes of tests is described in the sections that
follow.
Unit testing
• Unit testing focuses verification effort on the smallest unit of software design, i.e., the software component or module.
• Unit testing is usually automated and performed within the programmer's IDE. It is used to validate that individual units of source code remain working properly, for example that a function, method, loop, or statement in a program works as intended. It is executed by the developer.
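As an illustrative sketch (the function under test, order_total, and its test cases are hypothetical examples, not taken from these slides), a developer-written unit test in Python that exercises a typical value, the boundary conditions, and an error-handling path might look like this:

import unittest

def order_total(unit_price, quantity):
    # Return the total price; quantities outside 1..10 are rejected.
    if quantity < 1 or quantity > 10:
        raise ValueError("quantity must be between 1 and 10")
    return unit_price * quantity

class OrderTotalTest(unittest.TestCase):
    def test_typical_value(self):
        # A value well inside the valid range.
        self.assertEqual(order_total(5.0, 3), 15.0)

    def test_boundary_values(self):
        # Boundary conditions: the smallest and largest valid quantities.
        self.assertEqual(order_total(5.0, 1), 5.0)
        self.assertEqual(order_total(5.0, 10), 50.0)

    def test_error_handling_path(self):
        # Error-handling path: an out-of-range quantity must raise an error.
        with self.assertRaises(ValueError):
            order_total(5.0, 11)

if __name__ == "__main__":
    unittest.main()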
• Unit test considerations: The tests that occur as part of unit tests are illustrated
schematically in Figure.
• Local data structures are examined to ensure that data stored temporarily
maintains its integrity during all steps in an algorithm’s execution.
• All independent paths (basis paths) through the control structure are
exercised to ensure that all statements in a module have been executed at
least once.
• Boundary conditions are tested to ensure that the module operates properly at boundaries established to limit or restrict processing. And finally, all error handling paths are tested.
• Unit test procedures
❖ Unit testing is normally considered an adjunct to the coding step. The design of unit tests can be performed before coding begins (a preferred agile approach) or after source code has been generated.
❖ A review of design information provides guidance for establishing test cases that are likely to uncover errors in each of the categories.
❖ Each test case should be coupled with a set of expected results.
UNIT TEST ENVIRONMENT
• The unit test environment is illustrated in Fig.6.4
• In most applications a driver is nothing more than a “main program” that accepts test case data, passes such data to the component (to be tested), and prints relevant results.
• Stubs serve to replace modules that are subordinate to (called by) the component to be tested.
• A stub or “dummy subprogram” uses the subordinate module’s interface, may do minimal data manipulation, provides verification of entry, and returns control to the module undergoing testing.
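A minimal sketch of this arrangement in Python (the component compute_discount, the stub, and the test data are hypothetical, used only to illustrate the roles of driver and stub):

# Stub: stands in for the subordinate pricing module called by the component.
def get_base_price_stub(product_id):
    # Verify entry and return a fixed value instead of querying a real catalog.
    print(f"stub called with product_id={product_id}")
    return 100.0

# Component under test (hypothetical): calls a subordinate module for a base price.
def compute_discount(product_id, rate, get_base_price):
    return get_base_price(product_id) * (1.0 - rate)

# Driver: a small "main program" that feeds test case data to the component
# and prints the relevant results.
if __name__ == "__main__":
    for product_id, rate, expected in [(1, 0.10, 90.0), (2, 0.25, 75.0)]:
        actual = compute_discount(product_id, rate, get_base_price_stub)
        print(f"case ({product_id}, {rate}): expected={expected}, actual={actual}")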
2.INTEGRATION TESTING
• Integration testing is defined as a type of testing where software modules are integrated logically and tested as a group. A typical software project consists of multiple software modules, coded by different programmers.
• INTEGRATION TESTING is a level of
software testing where individual units are combined
and tested as a group. The purpose of this level of testing is
to expose faults in the interaction
between integrated units. Test drivers and test stubs are
used to assist in Integration Testing.
• Integration testing is a systematic technique for
constructing software architecture while at the same time
conducting tests to uncover errors associated with
interfacing.
• The Objective is to take unit tested components and build a program structure that has
been dictated by design.
• There is often a tendency to attempt non-incremental (“big bang”) integration: all components are combined in advance and the entire program is tested as a whole.
• A set of errors is encountered. Correction is difficult because isolation of causes is complicated by the vast expanse of the entire program. Once these errors are corrected, new ones appear and the process continues in a seemingly endless loop.
• In incremental integration the program is constructed and tested in small
increments, where errors are easier to isolate and correct , interfaces are more
likely to be tested completely, and a systematic test approach may be applied.
• A number of different incremental integration strategies are:
TOP – DOWN INTEGRATION
⮚ In the top-down approach, testing takes place from top to bottom, following the control flow of the software system.
⮚ Top-down integration testing is an incremental approach to construction of the software architecture.
⮚ Modules are integrated by moving downward through the control hierarchy, beginning with the main control module (main module). Modules subordinate (and ultimately subordinate) to the main control module are incorporated into the structure in either a depth-first or breadth-first manner.
⮚ Referring to Figure ,depth-first integration integrates all components on a
major control path of the program structure.
BOTTOM-UP INTEGRATION
⮚ In the bottom-up strategy, each module at the lower levels is tested with higher modules until all modules are tested. It uses drivers for testing.
⮚ Bottom-up integration testing, as its name implies, begins construction and testing with atomic modules (i.e., components at the lowest levels in the program structure).
⮚ Because components are integrated from the bottom up, processing required for components subordinate to a given level is always available and the need for stubs is eliminated.
⮚ A bottom-up integration strategy may be implemented with the following steps:
1. Low-level components are combined into clusters that perform a
specific software sub function.
2. A driver is written to coordinate test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving upward in the program
structure.
• Integration follows the pattern illustrated in Fig.6.6
• Components are combined to form clusters 1, 2, and 3. Each of the clusters is tested using a driver (shown as a dashed block).
• Components in clusters 1 and 2 are subordinate to Ma. Drivers D1 and D2 are removed and the clusters are interfaced directly to Ma.
• Similarly, driver D3 for cluster 3 is removed prior to integration with module Mb. Both Ma and Mb will ultimately be integrated with component Mc, and so forth.
3.REGRESSION TESTING
• Each time a new module is added as part of integration testing, the
software changes.
• New data flow paths are established, new I/O may occur, and new control
logic is invoked. These changes may cause problems with functions that
previously worked flawlessly.
• Regression testing verifies that recent code changes haven't altered or broken already existing functionality of a system. Regression testing examples include iteration regression and full regression, and both can be covered with manual and automated test cases.
• In the context of an integration test strategy, regression testing is the re-
execution of some subset of tests that have already been conducted to
ensure that changes have not propagated unintended side effects.
4.SMOKE TESTING
• Smoke testing is an integration testing approach that is commonly used when software
products are being developed.
• For example: in a project's first release, the development team releases the build for testing and the test team tests it. Testing the build for the very first time, to accept or reject it, is called smoke testing.
• It is designed as a pacing mechanism for time-critical projects, allowing the software team to assess its project on a frequent basis.
• The smoke testing approach encompasses the following activities.
1. Software components that have been translated into code are integrated into a
“build”.
2. A series of tests is designed to expose errors that will keep
the build from properly performing its function.
3. The build is integrated with other builds and the entire product
(in its current form) is smoke tested daily. The integration approach may be top down
or bottom up.
VALIDATION TESTING
1. Validation can be defined in many ways, but a simple definition is that
validation succeeds when software functions in a manner that can be
reasonably expected by the customer.
2. Reasonable expectations are defined in the software requirements
specification.
3. Input data rates may be increased by an order of magnitude to determine how
input functions will respond.
4. Test cases that require maximum memory or other resources are executed.
5. Test cases that may cause memory management problems are designed.
6. Test cases that may cause excessive hunting for disk-resident data are created.
Essentially, the tester attempts to overwhelm the program.
PERFORMANCE TESTING
1. For real-time and embedded systems, software that provides the required function but does not conform to performance requirements is unacceptable. Performance testing is designed to test the run-time performance of software within the context of an integrated system.
2. Performance testing occurs throughout all steps in the testing process. Even at the unit level, the performance of an individual module may be assessed as tests are conducted. However, it is not until all system elements are fully integrated that the true performance of a system can be ascertained.
3. Performance testing is the process of determining the speed, responsiveness and
stability of a computer, network, software program or device under a workload.
4. Performance Testing is done to provide stakeholders with information about their
application regarding speed, stability, and scalability. More importantly, Performance
Testing uncovers what needs to be improved before the product goes to market.
THE ART OF DEBUGGING
1. Software testing is a process that can be systematically planned and specified.
2. Test case design can be conducted, a strategy can be defined and results can be
evaluated against prescribed expectations.
3. Debugging occurs as a consequence of successful testing. That is, when a test case
uncovers an error, debugging is the process that results in the removal of the error.
4. Although debugging can and should be an orderly process, it is still very much an art.
5. Debugging is the process of detecting and removing existing and potential errors (also called 'bugs') in software code that can cause it to behave unexpectedly or crash. To prevent incorrect operation of a software or system, debugging is used to find and resolve bugs or defects.
THE DEBUGGING PROCESS
1. Debugging is not testing but always occurs as a consequence of testing. The
debugging process begins with the execution of a test case.
2. Testing is a process of finding bugs or errors in a software product that is done
manually by tester or can be automated. Debugging is a process of fixing the
bugs found in testing phase. Programmer or developer is responsible
for debugging and it can't be automated.
3. Results are assessed and a lack of correspondence between expected and actual
performance is encountered.
4. In many cases, the non-corresponding data are a symptom of an underlying
cause as yet hidden. The debugging process attempts to match symptom with
cause, thereby leading to error correction.
5. The debugging process will always have one of two outcomes:
6. The cause will be found and corrected, or The cause will not be found.
7. In the latter case, the person performing debugging may suspect a cause, design a
test case to help validate that suspicion, and work toward error correction in an
iterative fashion.
8. To observe the reasons behind the difficulty of debugging, a few characteristics of bugs are given below to provide some clues.
9. The symptom and the cause may be geographically remote; that is, the symptom may appear in one part of a program, while the cause may actually be located at a site that is far removed. Highly coupled program structures exacerbate this situation.
DEBUGGING APPROACHES
1. Regardless of the approach that is taken, debugging has one overriding objective: to find and correct the cause of a software error. The objective is realized by a combination of systematic evaluation, intuition, and luck.
2. Bradley describes the debugging approach in this way:
3. Debugging is a straightforward application of the scientific method that has
been developed over 2,500 years.
4. The basis of debugging is to locate the problem’s source by binary partitioning,
through working hypotheses that predict new values to be examined.
5. Take a simple non-software example: A lamp in my house does not work. If nothing in the house works, the cause must be in the main circuit breaker or outside; I look around to see whether the neighbourhood is blacked out. I plug the suspect lamp into a working socket and a working appliance into the suspect circuit. So goes the alternation of hypothesis and test.
• In general ,three categories for debugging approaches may be proposed
1. Brute force
2. Backtracking
3. Cause elimination
1.BRUTE FORCE
1. The brute force category of debugging is probably the most common and least
efficient method for isolating the cause of a software error.
2. This is the foremost common technique of debugging however is that the least
economical method.
3. We apply brute force debugging methods when all else fails. Using a “let the computer find the error” philosophy, memory dumps are taken, run-time traces are invoked, and the program is loaded with WRITE statements.
4. We hope that somewhere in the morass of information that is produced we will find a clue that will lead us to the cause of an error.
5. Although the mass of information produced may ultimately lead to success, it more frequently leads to wasted effort and time. Thought must be expended first.
2. BACKTRACKING
1. Backtracking is a fairly common debugging approach that can be used successfully in small programs. Beginning at the site where a symptom has been uncovered, the source code is traced backward (manually) until the site of the cause is found.
2. Unfortunately, as the number of source lines increases, the number of potential backward paths may become unmanageably large.
3. This is also a reasonably common approach. In this approach, starting from the statement at which an error symptom has been discovered, the source code is traced backward until the cause is discovered.
3.CAUSE ELIMINATION
1. The third approach to debugging , cause elimination, is manifested by induction
or deduction and introduces the concept of binary partitioning.
2. Data related to the error occurrence are organized to isolate potential causes. A
“cause hypothesis” is devised and the aforementioned data are used to prove or
disprove the hypothesis. Alternatively, a list of all possible causes is developed
and tests are conducted to eliminate each.
3. If initial tests indicate that a particular cause hypothesis shows promise, data
are refined in an attempt to isolate the bug.
Each of these debugging approaches can be supplemented with debugging tools. We can apply a wide variety of debugging compilers, dynamic debugging aids (“tracers”), automatic test case generators, memory dumps, and cross-reference maps. However, tools are not a substitute for careful evaluation based on a complete software design document and clear source code.
Once a bug has been found, it must be corrected. But, as we have already noted, the correction of a bug can introduce other errors and therefore do more harm than good. Van Vleck suggests three simple questions that every software engineer should ask before making the “correction” that removes the cause of a bug:
3.What could we have done to prevent this
bug in the first place?
• This question is the first step toward establishing a statistical software quality assurance approach. If we correct the process as well as the product, the bug will be removed from the current program and may be eliminated from all future programs.
• Changes to the software do not invalidate existing tests.
• The software recovers well from failures.
Understandability
“The more information we have, the smarter we will test.”
• The design is well understood.
• Dependencies between internal, external, and shared components are well
understood.
• Changes to design are communicated
• Technical documentation is instantly accessible.
• Technical documentation is well organized.
• Technical documentation is specific and detailed.
• Technical documentation is accurate.
The attributes suggested by Bach can be used by a software engineer to develop a software configuration (i.e., programs, data, and documents) that is amenable to testing.
Test characteristics
Following are some attributes of a “good” test:
• A good test has a high probability of finding an error.
• A good test is not redundant. There is no point in conducting a test that
has the same purpose as another test.
• A good test should be “best of breed”, i.e. the test that has the highest
likelihood of uncovering a whole class of errors should be used.
• A good test should be neither too simple nor too complex.
White-box testing
• White Box Testing is defined as the testing of a software solution's internal structure, design, and
coding. In this type of testing, the code is visible to the tester. It focuses primarily on verifying the
flow of inputs and outputs through the application, improving design and usability, strengthening
security.
• Example
A tester, usually a developer as well, studies the implementation code of a certain field on a
webpage, determines all legal (valid and invalid) AND illegal inputs and verifies the outputs against
the expected outcomes, which is also determined by studying the implementation code.
White Box Testing is like the work of a mechanic who examines the engine to see why the car is not
moving.
• White-box testing sometimes called glass-box testing, is a test case design philosophy that uses
the control structure described as part of component-level design to derive test cases.
• Using white-box testing methods, the software engineer can derive test cases that
o Guarantee that all independent paths within a module have been exercised at least once,
o Exercise all logical decisions on their true and false sides
• Execute all loops at their boundaries and within their operational bounds, and
• Exercise internal data structures to ensure their validity.
• Advantages
• Testing can be commenced at an earlier stage. One need not wait for the GUI to be available.
• Testing is more thorough, with the possibility of covering most paths.
• Disadvantages
• Since tests can be very complex, highly skilled resources are required, with a thorough knowledge
of programming and implementation.
• Test script maintenance can be a burden if the implementation changes too frequently.
• Since this method of testing is closely tied to the application being tested, tools for every kind of implementation/platform may not be readily available.
• Different white-box testing methods are:
o Basis path testing
o Control structure testing
Basis Path Testing
• Basis path testing is a white-box testing technique.
• The basis path method enables the test case designer to derive a logical complexity measure of a procedural design and use this measure as a guide for defining a basis set of execution paths.
• Test cases derived to exercise the basis set are guaranteed to execute every statement in the program at least one time during testing.
Basis path testing is a white-box testing method that defines test cases based on the flows or logical paths that can be taken through the program. In software engineering, basis path testing involves execution of all possible blocks in a program and achieves maximum path coverage with the least number of test cases.
Steps for Basis Path testing
The basic steps involved in basis path testing include
● Draw a control graph (to determine different program paths)
● Calculate Cyclomatic complexity (metrics to determine the number of independent paths)
● Find a basis set of paths
● Generate test cases to exercise each path
Cyclomatic Complexity
Cyclomatic complexity is a software metric used to measure the complexity of a program. This metric measures independent paths through program source code. An independent path is defined as a path that has at least one edge which has not been traversed before in any other path. Cyclomatic complexity can be calculated with respect to functions, modules, methods, or classes within a program.
In the graph, Nodes represent processing tasks while edges represent control flow between the
nodes.
Flow graph notation for a program:
Flow Graph notation for a program defines several nodes connected through the
edges. Below are Flow diagrams for statements like if-else, While, until and normal
sequence of flow.
How to Calculate Cyclomatic Complexity
Mathematical representation:
Mathematically, the basis set is a set of independent paths through the graph diagram. The code complexity of the program can be defined using the formula
V(G) = E - N + 2
Where,
E - Number of edges
N - Number of Nodes
V (G) = P + 1
Where P = Number of predicate nodes (node that contains condition)
Properties of Cyclomatic complexity:
Following are the properties of Cyclomatic complexity:
1. V (G) is the maximum number of independent paths in the graph
2. V (G) >=1
3. G will have one path if V (G) = 1
4. Minimize complexity to 10
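A minimal sketch of both calculations in Python, using the edge, node, and predicate-node counts of the flow graph discussed later in this chapter (11 edges, 9 nodes, 3 predicate nodes):

# Two equivalent ways of computing cyclomatic complexity, as defined above.
def cyclomatic_from_graph(edges, nodes):
    # V(G) = E - N + 2
    return edges - nodes + 2

def cyclomatic_from_predicates(predicate_nodes):
    # V(G) = P + 1
    return predicate_nodes + 1

# The flow graph of Fig. 7.2 has 11 edges, 9 nodes, and 3 predicate nodes,
# so both formulas agree: V(G) = 4 independent paths in the basis set.
assert cyclomatic_from_graph(edges=11, nodes=9) == 4
assert cyclomatic_from_predicates(predicate_nodes=3) == 4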
How is this metric useful for software testing?
Basis path testing is a white-box technique that guarantees that every statement is executed at least once during testing. It checks each linearly independent path through the program, which means the number of test cases will be equivalent to the cyclomatic complexity of the program.
Uses of Cyclomatic Complexity:
Cyclomatic Complexity can prove to be very helpful in
● Helps developers and testers to determine independent path executions
● Developers can ensure that all the paths have been tested at least once
● Helps us to focus more on the uncovered paths
● Improves code coverage in software engineering
● Evaluates the risk associated with the application or program
● Using this metric early in the cycle reduces the risk of the program
In this example (a small program with nested conditionals, shown in the original figure), a few conditional statements are executed depending on which condition is satisfied. There are 3 paths or conditions that need to be tested to get the output:
● Path 1: 1,2,3,5,6, 7
● Path 2: 1,2,4,5,6, 7
● Path 3: 1, 6, 7
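A small sketch of a function with this shape (the function, its conditions, and the mapping of statements to node numbers are hypothetical, chosen only to match the three paths listed above):

# Hypothetical function whose flow graph has the three basis paths above.
def classify(x, y):
    label = "skipped"                  # node 1: evaluate the outer condition
    if x > 0:                          # node 1 -> 2 (true) or node 1 -> 6 (false)
        if y > 0:                      # node 2 -> 3 (true) or node 2 -> 4 (false)
            label = "both positive"    # node 3
        else:
            label = "x positive only"  # node 4
        # node 5: the two inner branches rejoin here
    print(label)                       # node 6
    return label                       # node 7

# One test case per path in the basis set:
assert classify(1, 1) == "both positive"      # Path 1: 1,2,3,5,6,7
assert classify(1, -1) == "x positive only"   # Path 2: 1,2,4,5,6,7
assert classify(-1, 0) == "skipped"           # Path 3: 1,6,7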
• To illustrate the use of a flow graph, we consider the procedural
design representation in the Fig.7.2(a)
(a) Flowchart (b) Flow Graph
Fig. 7.2
• When stated in terms of a flow graph, an independent path must move along at least one edge that has not been traversed before the path is defined. For example, a set of independent paths for the flow graph illustrated in Fig. 7.2 is:
path 1: 1-11
path 2: 1-2-3-4-5-10-1-11
path 3: 1-2-3-6-7-9-10-1-11
path 4: 1-2-3-6-8-9-10-1-11
• Note that each new path introduces a new edge. The path 1-2-3-4-5-10-1-2-3-6-8-9-10-1-11 is not considered to be an independent path because it is simply a combination of already specified paths and does not traverse any new edges.
• Paths 1, 2, 3, and 4 constitute a basis set for the flow graph in Fig. 7.2.
• That is, if tests can be designed to force execution of these paths (a basis set), every statement in the program will have been guaranteed to be executed at least one time, and every condition will have been executed on its true and false sides.
• It should be noted that the basis set is not unique. In fact, a number of
different basis sets can be derived for a given procedural design.
• Cyclomatic complexity is a software metric that provides a quantitative measure of the logical complexity of a program.
• When used in the context of the basis path testing method, the value computed for Cyclomatic
complexity defines the number of independent paths in the basis set of a program and provides
us with an upper bound for the number of tests that must be conducted to ensure that all
statements have been executed at least once.
• Cyclomatic complexity has a foundation in graph theory and is computed in one of three ways:
o The number of regions corresponds to the cyclomatic complexity.
o Cyclomatic complexity, V(G), for a flow graph, G, is defined as V(G)=E-N+2 where E is the
number of flow edges, and N is the number of flow graph nodes.
o Cyclomatic complexity, V(G), for a flow graph, G, is defined as:
V(G)=P+1
where P is the number of predicate nodes contained in the flow graph G.
• Referring once more to the flow graph in Fig.7.2, the cyclomatic complexity can be
computed using each of the algorithms just noted:
1) The flow graph has four regions.
2) V(G)=11 edges-9 nodes+2=4
3) V(G)=3 predicate nodes +1=4.
Deriving Test Cases
• The basis path testing method can be applied to a procedural design or to source code.
• In this section, we present basis path testing as a series of steps.
• The following steps can be applied to derive the basis set:
o Using the design or code as a foundation, draw a corresponding flow graph.
o Determine the cyclomatic complexity of the resultant flow graph.
o Determine a basis set of linearly independent paths.
o Prepare test cases that will force execution of each path in the basis set.
• It is important to note that some independent paths cannot be tested in
stand-alone fashion.
• That is, the combination of data required to traverse the path cannot be achieved in the normal flow of the program.
In such cases, these paths are tested as part of another path test.
Graph Matrices
• A graph matrix is a tool that assists in basis path testing. A graph matrix is a square matrix
whose size (i.e., number of rows and columns) is equal to the number of nodes on the flow
graph.
• Each row and column corresponds to an identified node, and matrix entries correspond to connections (edges) between nodes. A simple example of a flow graph and its corresponding graph matrix is shown in Fig. 7.4.
• Referring to the Fig.7.4. each node on the flow graph is identified by numbers, while each
edge is identified by letters.
• A letter entry is made in the matrix to correspond to a connection between two nodes. For
example, node 3 is connected to node 4 by edge b.
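A minimal sketch of building such a matrix in Python (the four-node graph and its edge letters are hypothetical, not the graph of Fig. 7.4):

# Hypothetical flow graph: node pairs mapped to the letter of the connecting edge.
nodes = [1, 2, 3, 4]
edges = {(1, 2): "a", (2, 3): "b", (3, 4): "c", (4, 1): "d"}

# Square graph matrix: one row and one column per node; an empty entry means
# there is no edge from the row node to the column node.
matrix = [[edges.get((row, col), "") for col in nodes] for row in nodes]

for label, row in zip(nodes, matrix):
    print(label, row)   # e.g. the row for node 3 contains "c" in node 4's column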
Control Structure Testing
Control structure testing is a group of white-box testing methods.
Branch Testing
Condition Testing
Data Flow Testing
Loop Testing
1) Branch Testing:- For every decision, each branch needs to be executed at least once; this is also called decision testing.
Shortcoming: it ignores implicit paths that result from compound conditionals. It treats a compound conditional as a single statement. (We count each branch taken out of the decision, regardless of which condition led to the branch.)
This example has two branches to be executed:
IF ( a equals b) THEN statement 1
ELSE statement 2
END IF
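A minimal sketch of branch testing for the decision above in Python (the concrete values are hypothetical; any pair of equal and unequal values would do):

# Hypothetical function corresponding to the IF/ELSE above.
def choose(a, b):
    if a == b:
        return "statement 1"
    return "statement 2"

# Branch (decision) testing: one test case per branch out of the decision.
assert choose(3, 3) == "statement 1"   # true branch
assert choose(3, 4) == "statement 2"   # false branch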
2) Condition Testing:-
Condition testing is a test construction method that focuses on exercising the logical conditions
in a program module.
definition: "For a compound condition C, the true and false branches of C and every simple
condition in C need to be executed at least once."
Multiple-condition testing requires that all true-false combinations of simple conditions be
exercised at least once.
Therefore, all statements, branches, and conditions are necessarily covered.
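As a hedged sketch, multiple-condition testing for a hypothetical compound condition (a > 0 and b > 0) exercises all true/false combinations of the simple conditions:

# Hypothetical module code containing a compound condition C: (a > 0) and (b > 0).
def both_positive(a, b):
    return a > 0 and b > 0

# Multiple-condition testing: every true/false combination of the simple
# conditions (a > 0) and (b > 0) is exercised at least once.
cases = {
    (True, True): (1, 1),
    (True, False): (1, -1),
    (False, True): (-1, 1),
    (False, False): (-1, -1),
}
for (c1, c2), (a, b) in cases.items():
    assert both_positive(a, b) == (c1 and c2)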
3) Data Flow Testing:-
Selects test paths according to the location of definitions and use of variables. This is a somewhat sophisticated technique
and is not practical for extensive use. Its use should be targeted to modules with nested if and loop statements.
4) Loop Testing:-
Loops are fundamental to many algorithms and need thorough testing.
There are four different classes of loops: simple, concatenated, nested, and unstructured.
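A hedged sketch of simple-loop testing in Python (the loop, its bound, and the test values are hypothetical; typical cases are: skip the loop, one pass, two passes, a typical number of passes, and the maximum number of passes):

# Hypothetical function containing a simple loop that makes at most n passes.
def sum_first(values, n):
    total = 0
    for i in range(min(n, len(values))):   # the simple loop under test
        total += values[i]
    return total

data = [1, 2, 3, 4, 5]
assert sum_first(data, 0) == 0     # skip the loop entirely
assert sum_first(data, 1) == 1     # exactly one pass through the loop
assert sum_first(data, 2) == 3     # two passes
assert sum_first(data, 3) == 6     # a typical number of passes
assert sum_first(data, 5) == 15    # the maximum number of passes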
Black-Box Testing
• Black-box testing, also called behavioural testing, focuses on the functional requirements of the software.
• That is, black-box testing enables the software engineer to derive sets of input conditions that will fully exercise
all functional requirements for a program.
• Black-box testing is not an alternative to white-box techniques. Rather, it is a complementary approach that is likely to uncover a different class of errors than white-box methods.
• Black box testing is defined as a testing technique in which functionality of the Application Under Test
(AUT) is tested without looking at the internal code structure, implementation details and knowledge of
internal paths of the software. This type of testing is based entirely on software requirements and
specifications.
• In BlackBox Testing we just focus on inputs and output of the software system without bothering about
internal knowledge of the software program.
• For Example, an operating system like Windows, a website like Google, a database like Oracle or even
your own custom application. Under Black Box Testing, you can test these applications by just focusing
on the inputs and outputs without knowing their internal code implementation.
• Black-box testing attempts to find errors in the following categories:
o Incorrect or missing functions,
o Interface errors,
o Errors in data structures or external database access,
o Behaviour or performance errors, and
o Initialization and termination errors.
Difference between Black Box and White
Box Testing
• Different black-box testing methods are:
oGraph-Based Testing Method
oEquivalence partitioning
oBoundary Value Analysis
oOrthogonal Array Testing
• Unlike white-box testing, which is performed early in the testing process, black box testing
tends to be applied during later stages of testing.
Graph-Based Testing Method
• Software testing begins by creating a graph of important objects and their relationships and
then devising a series of tests that will cover the graph so that each object and relationship
is exercised and errors are uncovered.
• To accomplish these steps, the software engineer begins by creating a graph, i.e. a
collection of nodes that represent objects, links that represent the relationships between
objects, node weights that describe the properties of a node, and link weights that describe
some characteristics of a link.
o This technique of Black box testing involves a graph drawing that depicts the link between the causes
(inputs) and the effects (output), which trigger the effects. This testing utilizes different combinations of
output and inputs. It is a helpful technique to understand the software’s functional performance, as it
visualizes the flow of inputs and outputs in a lively fashion.
o If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined.
o If an input condition is Boolean, one valid and one invalid class are defined.
•By applying these guidelines for the derivation of equivalence classes, test cases for each input domain data
object can be developed and executed.
Boundary value Analysis
• Boundary value testing is focused on the values at boundaries. This technique determines whether
a certain range of values are acceptable by the system or not. It is very useful in reducing the
number of test cases. It is most suitable for the systems where an input is within certain ranges.
• A greater number of errors occur at the boundaries of the input domain rather than in the” center”.
• It is for this reason that boundary value analysis(BVA) has been developed as a testing technique. BVA
leads to a selection of test cases that exercise bounding values.
• Boundary value analysis is a test case design technique that complements equivalence partitioning.
• Rather than selecting any element of an equivalence class, BVA leads to the selection of test cases at the “edges” of the class.
• Rather than focusing solely on input conditions, BVA derives test cases from the output domain as well.
• Guidelines for BVA are:
o If an input condition specifies a range bounded by values a and b, test cases should be designed with values a and b, and values just above and just below a and b.
o If an input condition specifies a number of values, test cases should be developed that exercise the minimum and maximum numbers. Values just above and below the minimum and maximum are also tested.
o Apply first two guidelines to output conditions.
For example, assume that a temperature vs. pressure table is required as output from an
engineering analysis program.
o Test cases should be designed to create an output report that produces the maximum(and
minimum) allowable number of table entries.
o If internal program data structures have prescribed boundaries(e.g.an array has a defined
limit of 100 entries), be certain to design a test case to exercise the data structure at its
boundary.
o Most software engineers intuitively perform BVA to some degree. By applying these
guidelines, boundary testing will be more complete, thereby having a higher likelihood for
error detection.
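A hedged sketch of boundary value analysis in Python for a hypothetical input field whose valid range is 1 to 100 (the validator and the range are assumptions, used only to illustrate the guidelines above):

# Hypothetical validator for an input whose valid range is bounded by a=1 and b=100.
def is_valid_quantity(q):
    return 1 <= q <= 100

# BVA test cases: the boundaries themselves plus values just below and just above them.
bva_cases = {0: False, 1: True, 2: True, 99: True, 100: True, 101: False}
for value, expected in bva_cases.items():
    assert is_valid_quantity(value) == expected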
Equivalence Partitioning Testing
Equivalence class partitioning is a black-box technique (the code is not visible to the tester) which can be applied to all levels of testing like unit, integration, system, etc. In this technique, you divide the set of test conditions into partitions that can be considered the same.
● It divides the input data of software into different equivalence data classes.
● You can apply this technique, where there is a range in the input field.
Example 1: Equivalence and Boundary Value
● Let's consider the behavior of the Order Pizza text box below.
● Pizza values 1 to 10 are considered valid. A success message is shown.
● Values 11 to 99 are considered invalid for an order, and an error message will appear: "Only 10 Pizza can be ordered"
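A minimal sketch of the pizza example as equivalence classes (the validation function is hypothetical; only the ranges 1-10 and 11-99 come from the slide):

# Hypothetical validator for the "Order Pizza" text box described above.
def order_pizza(count):
    if 1 <= count <= 10:
        return "success"
    return "Only 10 Pizza can be ordered"

# One representative test case per equivalence class is usually enough:
assert order_pizza(5) == "success"                         # valid class: 1..10
assert order_pizza(50) == "Only 10 Pizza can be ordered"   # invalid class: 11..99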
Orthogonal Array Testing
Orthogonal Array Testing (OAT) is defined as a type of software testing strategy that uses
Orthogonal Arrays especially when the system to be tested has huge data inputs.
For example, when a train ticket has to be verified, factors such as the number of passengers, ticket number, seat numbers, and train numbers have to be tested, which becomes difficult when a tester verifies inputs one by one. Hence, it is more efficient to combine several inputs together and test them. Here, we can use the orthogonal array testing method.
This type of pairing or combining of inputs and testing the system to save time is called Pairwise
testing. OATS technique is used for pairwise testing.
OAT Advantages
● Guarantees testing of the pair-wise combinations of all the selected variables.
● Reduces the number of test cases
● Creates fewer Test cases which cover the testing of all the combination of all variables.
● A complex combination of the variables can be done.
● Is simpler to generate and less error-prone than test sets created by hand.
● It is useful for Integration Testing.
● It improves productivity due to reduced test cycles and testing times.
OAT Disadvantages
● As the data inputs increase, the complexity of the Test case increases. As a result, manual
effort and time spent increases. Hence, the testers have to go for Automation Testing.
● Useful for Integration Testing of software components.
Risk Management in S/W Engineering
What is Risk?
Risk is the uncertainty which is associated with a future event which may or may not
occur and a corresponding potential for loss.
Very simply, a risk is a potential problem. It’s an activity or event that may compromise
the success of a software development project. Risk is the possibility of suffering loss,
and total risk exposure to a specific project will account for both the probability and the
size of the potential loss.
Types of Risk
The various categories of risks associated with software project management are
enumerated below.
1. Schedule / Time-Related / Delivery Related Planning Risks
2. Budget / Financial Risks
3. Operational / Procedural Risks
4. Technical / Functional / Performance Risks
5. Project Risk
6. Other Unavoidable Risks
Schedule / Time-Related / Delivery Related Planning Risks: These risks are related to running behind schedule and are essentially time-related risks, which directly impact the delivery of the project.
Budget / Financial Risks: These are the monetary risks which are associated with budget overruns. Some of the reasons for such risks are
● Improper Budget Estimation
● Cost Overruns due to underutilization of resources
● Expansion of Project Scope
Operational / Procedural Risks: These are risks which are associated with the day-to-day operational activities of the project.
Technical / Functional / Performance Risks
These are technical risks associated with the functionality of the software or with respect to
the software performance. In order to compensate for excessive budget overruns and
schedule overruns, companies sometimes reduce the functionality of the software.
Other Unavoidable Risks
All the risks described above are those which can be anticipated to a certain extent and planned for in advance. However, there are certain risks which are unavoidable in nature. Although these risks are broadly unavoidable, an organization may anticipate and thereby reduce the impact of such risks by
● keeping abreast with changes in government policy
● monitoring the competition
● catering to the needs of the customer and ensuring customer satisfaction
What Is Risk Management In Software Engineering?
Risk management means risk containment and mitigation. First, you’ve got to identify and plan.
Then be ready to act when a risk arises, drawing upon the experience and knowledge of the
entire team to minimize the impact to the project.
Risk management includes the following tasks:
● Identify risks and their triggers
● Classify and prioritize all risks
● Craft a plan that links each risk to a mitigation
● Monitor for risk triggers during the project
● Implement the mitigating action if any risk materializes
● Communicate risk status throughout project
Risk Projection
Risk projection, also called risk estimation, attempts to rate each risk in two ways—the
likelihood or probability that the risk is real and the consequences of the problems
associated with the risk, should it occur.
The project planner, along with other managers and technical staff, performs four risk
projection activities:
(1) Establish a scale that reflects the perceived likelihood of a risk.
(2) Delineate the consequences of the risk.
(3) Estimate the impact of the risk on the project and the product.
(4) Note the overall accuracy of the risk projection so that there will be no
misunderstandings.
Developing a Risk Table
Risk table provides a project manager with a simple technique for risk projection.
Steps in Setting up Risk Table
(1) Project team begins by listing all risks in the first column of the table.
Accomplished with the help of the risk item checklists.
(2) Each risk is categorized in the second column.
(e.g. PS implies a project size risk, BU implies a business risk).
(3) The probability of occurrence of each risk is entered in the next column of the
table.
The probability value for each risk can be estimated by team members individually.
(4) Individual team members are polled in round-robin fashion until their assessment
of risk probability begins to converge.
Assessing Risk Impact
Nature of the risk - the problems that are likely if it occurs.
e.g. a poorly defined external interface to customer hardware (a technical risk) will
preclude early design and testing and will likely lead to system integration problems
late in a project.
Scope of a risk - combines the severity with its overall distribution (how much of the
project will be affected or how many customers are harmed?).
Timing of a risk - when and how long the impact will be felt.
Overall risk exposure, RE, determined using:
RE = P x C
P is the probability of occurrence for a risk.
C is the cost to the project should the risk occur.
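A minimal worked sketch of this formula (the risk, its probability, and its cost are illustrative assumptions, not figures from the slides):

# Hypothetical risk: only 70% of planned reusable components can actually be
# reused, so the remainder must be custom built at extra cost.
probability = 0.80            # P: estimated likelihood that the risk occurs
cost_if_it_occurs = 25_200    # C: estimated cost to the project (currency units)

risk_exposure = probability * cost_if_it_occurs   # RE = P x C
print(f"RE = {risk_exposure:,.0f}")               # RE = 20,160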
The Real-World Challenges of Medical Device Cybersecurity- Mitigating Vulnera...
 
EY_Graph Database Powered Sustainability
EY_Graph Database Powered SustainabilityEY_Graph Database Powered Sustainability
EY_Graph Database Powered Sustainability
 
cybersecurity notes for mca students for learning
cybersecurity notes for mca students for learningcybersecurity notes for mca students for learning
cybersecurity notes for mca students for learning
 
BATTLEFIELD ORM: TIPS, TACTICS AND STRATEGIES FOR CONQUERING YOUR DATABASE
BATTLEFIELD ORM: TIPS, TACTICS AND STRATEGIES FOR CONQUERING YOUR DATABASEBATTLEFIELD ORM: TIPS, TACTICS AND STRATEGIES FOR CONQUERING YOUR DATABASE
BATTLEFIELD ORM: TIPS, TACTICS AND STRATEGIES FOR CONQUERING YOUR DATABASE
 
Unveiling the Tech Salsa of LAMs with Janus in Real-Time Applications
Unveiling the Tech Salsa of LAMs with Janus in Real-Time ApplicationsUnveiling the Tech Salsa of LAMs with Janus in Real-Time Applications
Unveiling the Tech Salsa of LAMs with Janus in Real-Time Applications
 
Der Spagat zwischen BIAS und FAIRNESS (2024)
Der Spagat zwischen BIAS und FAIRNESS (2024)Der Spagat zwischen BIAS und FAIRNESS (2024)
Der Spagat zwischen BIAS und FAIRNESS (2024)
 
DNT_Corporate presentation know about us
DNT_Corporate presentation know about usDNT_Corporate presentation know about us
DNT_Corporate presentation know about us
 
Advancing Engineering with AI through the Next Generation of Strategic Projec...
Advancing Engineering with AI through the Next Generation of Strategic Projec...Advancing Engineering with AI through the Next Generation of Strategic Projec...
Advancing Engineering with AI through the Next Generation of Strategic Projec...
 
Salesforce Certified Field Service Consultant
Salesforce Certified Field Service ConsultantSalesforce Certified Field Service Consultant
Salesforce Certified Field Service Consultant
 
Professional Resume Template for Software Developers
Professional Resume Template for Software DevelopersProfessional Resume Template for Software Developers
Professional Resume Template for Software Developers
 
The Evolution of Karaoke From Analog to App.pdf
The Evolution of Karaoke From Analog to App.pdfThe Evolution of Karaoke From Analog to App.pdf
The Evolution of Karaoke From Analog to App.pdf
 
Russian Call Girls in Karol Bagh Aasnvi ➡️ 8264348440 💋📞 Independent Escort S...
Russian Call Girls in Karol Bagh Aasnvi ➡️ 8264348440 💋📞 Independent Escort S...Russian Call Girls in Karol Bagh Aasnvi ➡️ 8264348440 💋📞 Independent Escort S...
Russian Call Girls in Karol Bagh Aasnvi ➡️ 8264348440 💋📞 Independent Escort S...
 
Hand gesture recognition PROJECT PPT.pptx
Hand gesture recognition PROJECT PPT.pptxHand gesture recognition PROJECT PPT.pptx
Hand gesture recognition PROJECT PPT.pptx
 
Alluxio Monthly Webinar | Cloud-Native Model Training on Distributed Data
Alluxio Monthly Webinar | Cloud-Native Model Training on Distributed DataAlluxio Monthly Webinar | Cloud-Native Model Training on Distributed Data
Alluxio Monthly Webinar | Cloud-Native Model Training on Distributed Data
 
Adobe Marketo Engage Deep Dives: Using Webhooks to Transfer Data
Adobe Marketo Engage Deep Dives: Using Webhooks to Transfer DataAdobe Marketo Engage Deep Dives: Using Webhooks to Transfer Data
Adobe Marketo Engage Deep Dives: Using Webhooks to Transfer Data
 
why an Opensea Clone Script might be your perfect match.pdf
why an Opensea Clone Script might be your perfect match.pdfwhy an Opensea Clone Script might be your perfect match.pdf
why an Opensea Clone Script might be your perfect match.pdf
 
Project Based Learning (A.I).pptx detail explanation
Project Based Learning (A.I).pptx detail explanationProject Based Learning (A.I).pptx detail explanation
Project Based Learning (A.I).pptx detail explanation
 

TEST STRATEGIES FOR CONVENTIONAL SOFTWARE
❖ There are many strategies that can be used to test software. At one extreme, a software team could wait until the system is fully constructed and then conduct tests on the overall system in the hope of finding errors.
❖ At the other extreme, a software engineer could conduct tests on a daily basis, whenever any part of the system is constructed.
❖ The testing strategy chosen by most software teams falls between these two extremes.
❖ It takes an incremental view of testing, beginning with the testing of individual program units, moving to tests designed to facilitate the integration of the units, and culminating with tests that exercise the constructed system.
❖ Each of these classes of tests is described in the sections that follow.
Unit Testing
• Unit testing focuses verification effort on the smallest unit of software design, i.e., the software component or module.
• Unit testing is usually an automated process and is performed within the programmer's IDE. Unit testing is used to verify that separate units of source code keep working properly. Example: a function, method, loop or statement in a program is working fine. It is executed by the developer.
• Unit test considerations: the tests that occur as part of unit testing are illustrated schematically in the figure.
• Local data structures are examined to ensure that data stored temporarily maintains its integrity during all steps in an algorithm's execution.
• All independent paths (basis paths) through the control structure are exercised to ensure that all statements in a module have been executed at least once.
• Boundary conditions are tested to ensure that the module operates properly at boundaries established to limit or restrict processing. Finally, all error-handling paths are tested.
Unit Test Procedures
❖ Unit testing is normally considered an adjunct to the coding step. The design of unit tests can be performed before coding begins (a preferred agile approach) or after source code has been generated.
❖ A review of design information provides guidance for establishing test cases that are likely to uncover errors in each of the categories discussed above.
❖ Each test case should be coupled with a set of expected results.
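As a minimal sketch of these ideas, the unit test below exercises a hypothetical compute_discount() function at its boundaries and couples each input with an expected result. The function name, the threshold values and the use of pytest are illustrative assumptions, not part of the original slides.

    # test_discount.py -- illustrative unit test for a hypothetical component
    import pytest

    def compute_discount(order_total):
        """Component under test: 10% discount for orders of 100 or more."""
        if order_total < 0:
            raise ValueError("order total cannot be negative")
        return order_total * 0.9 if order_total >= 100 else order_total

    # Each test case is coupled with its expected result, including boundary values.
    @pytest.mark.parametrize("order_total, expected", [
        (0, 0),          # lower boundary
        (99.99, 99.99),  # just below the discount threshold
        (100, 90.0),     # on the threshold
        (250, 225.0),    # typical value
    ])
    def test_compute_discount(order_total, expected):
        assert compute_discount(order_total) == pytest.approx(expected)

    def test_negative_total_is_rejected():
        # error-handling path
        with pytest.raises(ValueError):
            compute_discount(-1)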
UNIT TEST ENVIRONMENT
• The unit test environment is illustrated in Fig. 6.4.
• In most applications a driver is nothing more than a "main program" that accepts test case data, passes such data to the component (to be tested), and prints the relevant results.
• Stubs serve to replace modules that are subordinate to (called by) the component to be tested.
• A stub or "dummy subprogram" uses the subordinate module's interface, may do minimal data manipulation, provides verification of entry, and returns control to the module undergoing testing.
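The following is a small sketch of a driver and a stub, assuming a hypothetical process_order component that normally calls a subordinate charge_card module; all names are invented for illustration.

    # Hypothetical component under test; it calls a subordinate module charge_card().
    def process_order(order_total, charge_card):
        if order_total <= 0:
            return "rejected"
        receipt = charge_card(order_total)   # call to the subordinate module
        return f"accepted:{receipt}"

    # Stub: replaces the real charge_card module, verifies entry, returns control.
    def charge_card_stub(amount):
        print(f"stub: charge_card called with amount={amount}")  # verification of entry
        return "STUB-RECEIPT"                                    # minimal data manipulation

    # Driver: a "main program" that feeds test case data to the component and prints results.
    if __name__ == "__main__":
        for order_total, expected in [(50, "accepted:STUB-RECEIPT"), (0, "rejected")]:
            actual = process_order(order_total, charge_card_stub)
            print(order_total, actual, "OK" if actual == expected else "FAIL")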
2. INTEGRATION TESTING
• Integration testing is a type of testing in which software modules are integrated logically and tested as a group. A typical software project consists of multiple software modules, coded by different programmers.
• Integration testing is a level of software testing where individual units are combined and tested as a group. The purpose of this level of testing is to expose faults in the interaction between integrated units. Test drivers and test stubs are used to assist in integration testing.
• Integration testing is a systematic technique for constructing the software architecture while at the same time conducting tests to uncover errors associated with interfacing.
• The objective is to take unit-tested components and build a program structure that has been dictated by design.
• There is often a tendency to attempt non-incremental integration: all components are combined in advance and the entire program is tested as a whole.
• A set of errors is encountered. Correction is difficult because isolation of causes is complicated by the vast expanse of the entire program. Once these errors are corrected, new ones appear and the process continues in a seemingly endless loop.
• In incremental integration the program is constructed and tested in small increments, where errors are easier to isolate and correct, interfaces are more likely to be tested completely, and a systematic test approach may be applied.
• A number of different incremental integration strategies are discussed below.
TOP-DOWN INTEGRATION
⮚ In the top-down approach, testing takes place from top to bottom, following the control flow of the software system.
⮚ Top-down integration testing is an incremental approach to construction of the software architecture.
⮚ Modules are integrated by moving downward through the control hierarchy, beginning with the main control module (main module). Modules subordinate (and ultimately subordinate) to the main control module are incorporated into the structure in either a depth-first or breadth-first manner.
⮚ Referring to the figure, depth-first integration integrates all components on a major control path of the program structure.
BOTTOM-UP INTEGRATION
⮚ In the bottom-up strategy, each module at the lower levels is tested with higher modules until all modules are tested. It relies on drivers for testing.
⮚ Bottom-up integration testing, as its name implies, begins construction and testing with atomic modules (i.e., components at the lowest levels in the program structure).
⮚ Because components are integrated from the bottom up, processing required for components subordinate to a given level is always available and the need for stubs is eliminated.
⮚ A bottom-up integration strategy may be implemented with the following steps:
1. Low-level components are combined into clusters that perform a specific software subfunction.
2. A driver is written to coordinate test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined, moving upward in the program structure.
• Integration follows the pattern illustrated in Fig. 6.6.
• Components are combined to form clusters 1, 2, and 3. Each of the clusters is tested using a driver (shown as a dashed block).
• Components in clusters 1 and 2 are subordinate to Ma. Drivers D1 and D2 are removed and the clusters are interfaced directly to Ma.
• Similarly, driver D3 for cluster 3 is removed prior to integration with module Mb. Both Ma and Mb will ultimately be integrated with component Mc, and so forth.
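The sketch below illustrates the driver step of the bottom-up strategy with hypothetical low-level components: a driver coordinates test case input and output for a small cluster; once the cluster passes, the driver is discarded and the cluster is called by the next module up. All names are invented for illustration.

    # Cluster of low-level components performing one subfunction (parsing a record).
    def split_fields(line):
        return line.strip().split(",")

    def to_amount(field):
        return round(float(field), 2)

    # Driver for the cluster: coordinates test case input and output.
    def cluster_driver():
        cases = [("ACME, 19.99\n", ["ACME", " 19.99"]), ("X,0\n", ["X", "0"])]
        for raw, expected in cases:
            assert split_fields(raw) == expected
        assert to_amount(" 19.99") == 19.99
        print("cluster OK")

    # Higher-level module that the cluster is later integrated into (driver then removed).
    def parse_record(line):
        name, amount = split_fields(line)
        return name, to_amount(amount)

    if __name__ == "__main__":
        cluster_driver()
        print(parse_record("ACME,19.99"))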
3. REGRESSION TESTING
• Each time a new module is added as part of integration testing, the software changes.
• New data flow paths are established, new I/O may occur, and new control logic is invoked. These changes may cause problems with functions that previously worked flawlessly.
• Regression testing verifies that recent code changes have not altered or destroyed the already existing functionality of a system. Regression testing examples include iteration regression and full regression, and both can be covered with manual and automated test cases.
• In the context of an integration test strategy, regression testing is the re-execution of some subset of tests that have already been conducted to ensure that changes have not propagated unintended side effects.
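One common way to realize "re-execution of some subset of tests" is to tag the tests that guard previously working behaviour and re-run only that subset after each change. The sketch below uses pytest markers; the marker name and the tests themselves are assumptions for illustration (the marker should also be registered in pytest.ini to avoid warnings).

    # test_invoice.py -- a regression subset selected with a pytest marker (illustrative)
    import pytest

    def total_with_tax(net, rate=0.2):
        return round(net * (1 + rate), 2)

    @pytest.mark.regression          # guards behaviour that already worked before the change
    def test_existing_tax_calculation_still_correct():
        assert total_with_tax(100) == 120.0

    def test_new_reduced_rate():     # new behaviour added in the current increment
        assert total_with_tax(100, rate=0.05) == 105.0

    # Re-run only the regression subset after each new module is integrated:
    #   pytest -m regression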
4. SMOKE TESTING
• Smoke testing is an integration testing approach that is commonly used when software products are being developed.
• For example, in a project's first release, the development team releases a build for testing and the test team tests the build. Testing the build for the very first time, in order to accept or reject it, is called smoke testing.
• It is designed as a pacing mechanism for time-critical projects, allowing the software team to assess its project on a frequent basis.
• The smoke testing approach encompasses the following activities:
1. Software components that have been translated into code are integrated into a "build".
2. A series of tests is designed to expose errors that will keep the build from properly performing its function.
3. The build is integrated with other builds and the entire product (in its current form) is smoke tested daily. The integration approach may be top down or bottom up.
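A smoke suite is deliberately shallow: it only checks that the day's build starts up and that a critical function responds, so it can be run against every build. The module and function names below are hypothetical.

    # smoke_test.py -- minimal daily smoke check for a hypothetical "orders" build
    import importlib

    def run_smoke():
        failures = []
        # 1. The build's main modules can at least be imported.
        for module_name in ("orders.api", "orders.storage"):
            try:
                importlib.import_module(module_name)
            except Exception as exc:
                failures.append(f"import {module_name}: {exc}")
        # 2. One critical end-to-end function responds (no deep verification).
        try:
            from orders.api import health_check   # hypothetical entry point
            if health_check() != "ok":
                failures.append("health_check did not return 'ok'")
        except Exception as exc:
            failures.append(f"health_check: {exc}")
        return failures

    if __name__ == "__main__":
        problems = run_smoke()
        print("SMOKE PASSED" if not problems else "SMOKE FAILED:\n" + "\n".join(problems))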
VALIDATION TESTING
1. Validation can be defined in many ways, but a simple definition is that validation succeeds when the software functions in a manner that can be reasonably expected by the customer.
2. Reasonable expectations are defined in the software requirements specification.
The remaining cases push the system toward its limits (an approach usually described as stress testing):
3. Input data rates may be increased by an order of magnitude to determine how input functions will respond.
4. Test cases that require maximum memory or other resources are executed.
5. Test cases that may cause memory management problems are designed.
6. Test cases that may cause excessive hunting for disk-resident data are created.
Essentially, the tester attempts to overwhelm the program.
PERFORMANCE TESTING
1. For real-time and embedded systems, software that provides the required functions but does not conform to performance requirements is unacceptable. Performance testing is designed to test the run-time performance of software within the context of an integrated system.
2. Performance testing occurs throughout all steps in the testing process. Even at the unit level, the performance of an individual module may be assessed as tests are conducted; however, it is not until all system elements are fully integrated that the true performance of a system can be ascertained.
3. Performance testing is the process of determining the speed, responsiveness and stability of a computer, network, software program or device under a workload.
4. Performance testing is done to provide stakeholders with information about their application regarding speed, stability, and scalability. More importantly, performance testing uncovers what needs to be improved before the product goes to market.
THE ART OF DEBUGGING
1. Software testing is a process that can be systematically planned and specified.
2. Test case design can be conducted, a strategy can be defined, and results can be evaluated against prescribed expectations.
3. Debugging occurs as a consequence of successful testing. That is, when a test case uncovers an error, debugging is the process that results in the removal of the error.
4. Although debugging can and should be an orderly process, it is still very much an art.
5. Debugging is the process of detecting and removing existing and potential errors (also called "bugs") in software code that can cause it to behave unexpectedly or crash. To prevent incorrect operation of a software system, debugging is used to find and resolve bugs or defects.
THE DEBUGGING PROCESS
1. Debugging is not testing but always occurs as a consequence of testing. The debugging process begins with the execution of a test case.
2. Testing is a process of finding bugs or errors in a software product; it is done manually by a tester or can be automated. Debugging is the process of fixing the bugs found in the testing phase. The programmer or developer is responsible for debugging, and it cannot be automated.
3. Results are assessed and a lack of correspondence between expected and actual performance is encountered.
4. In many cases, the non-corresponding data are a symptom of an underlying cause that is as yet hidden. The debugging process attempts to match symptom with cause, thereby leading to error correction.
5. The debugging process will always have one of two outcomes: the cause will be found and corrected, or the cause will not be found.
6. In the latter case, the person performing debugging may suspect a cause, design a test case to help validate that suspicion, and work toward error correction in an iterative fashion.
7. To see why debugging is difficult, a few characteristics of bugs are given below.
8. The symptom and the cause may be geographically remote; that is, the symptom may appear in one part of a program, while the cause may actually be located at a site that is far removed. Highly coupled program structures exacerbate this situation.
DEBUGGING APPROACHES
1. Regardless of the approach that is taken, debugging has one overriding objective: to find and correct the cause of a software error. The objective is realized by a combination of systematic evaluation, intuition and luck.
2. Bradley describes the debugging approach in this way:
3. "Debugging is a straightforward application of the scientific method that has been developed over 2,500 years.
4. The basis of debugging is to locate the problem's source by binary partitioning, through working hypotheses that predict new values to be examined."
5. Take a simple non-software example: a lamp in my house does not work. If nothing in the house works, the cause must be in the main circuit breaker or outside; I look around to see whether the neighbourhood is blacked out. I plug the suspect lamp into a working socket and a working appliance into the suspect circuit. So goes the alternation of hypothesis and test.
• In general, three categories of debugging approaches may be proposed:
1. Brute force
2. Backtracking
3. Cause elimination
1. BRUTE FORCE
1. The brute force category of debugging is probably the most common and least efficient method for isolating the cause of a software error.
2. This is the most common technique of debugging, but it is the least economical method.
3. We apply brute force debugging methods when all else fails. Using a "let the computer find the error" philosophy, memory dumps are taken, run-time traces are invoked, and the program is loaded with WRITE statements.
4. We hope that somewhere in the morass of information that is produced we will find a clue that will lead us to the cause of an error.
5. Although the mass of information produced may ultimately lead to success, it more frequently leads to wasted effort and time. Thought must be expended first.
2. BACKTRACKING
1. Backtracking is a fairly common debugging approach that can be used successfully in small programs. Beginning at the site where a symptom has been uncovered, the source code is traced backward (manually) until the site of the cause is found.
2. Unfortunately, as the number of source lines increases, the number of potential backward paths may become unmanageably large.
3. In this approach, starting from the statement at which an error symptom has been discovered, the source code is traced backward until the error is discovered.
3. CAUSE ELIMINATION
1. The third approach to debugging, cause elimination, is manifested by induction or deduction and introduces the concept of binary partitioning.
2. Data related to the error occurrence are organized to isolate potential causes. A "cause hypothesis" is devised and the aforementioned data are used to prove or disprove the hypothesis. Alternatively, a list of all possible causes is developed and tests are conducted to eliminate each.
3. If initial tests indicate that a particular cause hypothesis shows promise, data are refined in an attempt to isolate the bug.
Each of these debugging approaches can be supplemented with debugging tools. We can apply a wide variety of debugging compilers, dynamic debugging aids ("tracers"), automatic test case generators, memory dumps, and cross-reference maps. However, tools are not a substitute for careful evaluation based on a complete software design document and clear source code.
Once a bug has been found, it must be corrected. But, as we have already noted, the correction of a bug can introduce other errors and therefore do more harm than good. Van Vleck suggests three simple questions that every software engineer should ask before making the "correction" that removes the cause of a bug:
1. Is the cause of the bug reproduced in another part of the program?
2. What "next bug" might be introduced by the fix I am about to make?
3. What could we have done to prevent this bug in the first place?
• This question is the first step toward establishing a statistical software quality assurance approach. If we correct the process as well as the product, the bug will be removed from the current program and may be eliminated from all future programs.
Stability
• Changes to the software do not invalidate existing tests.
• The software recovers well from failures.
Understandability
"The more information we have, the smarter we will test."
• The design is well understood.
• Dependencies between internal, external, and shared components are well understood.
• Changes to the design are communicated.
• Technical documentation is instantly accessible.
• Technical documentation is well organized.
• Technical documentation is specific and detailed.
• Technical documentation is accurate.
The attributes suggested by Bach can be used by a software engineer to develop a software configuration (i.e., programs, data, and documents) that is amenable to testing.
Test Characteristics
Following are some attributes of a "good" test:
• A good test has a high probability of finding an error.
• A good test is not redundant. There is no point in conducting a test that has the same purpose as another test.
• A good test should be "best of breed", i.e. the test that has the highest likelihood of uncovering a whole class of errors should be used.
• A good test should be neither too simple nor too complex.
White-Box Testing
• White-box testing is the testing of a software solution's internal structure, design, and coding. In this type of testing, the code is visible to the tester. It focuses primarily on verifying the flow of inputs and outputs through the application, improving design and usability, and strengthening security.
• Example: a tester, usually a developer as well, studies the implementation code of a certain field on a webpage, determines all valid and invalid inputs, and verifies the outputs against the expected outcomes, which are also determined by studying the implementation code. White-box testing is like the work of a mechanic who examines the engine to see why the car is not moving.
• White-box testing, sometimes called glass-box testing, is a test case design philosophy that uses the control structure described as part of component-level design to derive test cases.
• Using white-box testing methods, the software engineer can derive test cases that
o guarantee that all independent paths within a module have been exercised at least once,
o exercise all logical decisions on their true and false sides,
o execute all loops at their boundaries and within their operational bounds, and
o exercise internal data structures to ensure their validity.
Advantages
• Testing can be commenced at an earlier stage. One need not wait for the GUI to be available.
• Testing is more thorough, with the possibility of covering most paths.
Disadvantages
• Since tests can be very complex, highly skilled resources are required, with a thorough knowledge of programming and implementation.
• Test script maintenance can be a burden if the implementation changes too frequently.
• Since this method of testing is closely tied to the application being tested, tools for every kind of implementation/platform may not be readily available.
Different white-box testing methods are:
o Basis path testing
o Control structure testing
Basis Path Testing
• Basis path testing is a white-box testing technique.
• The basis path method enables the test case designer to derive a logical complexity measure of a procedural design and use this measure as a guide for defining a basis set of execution paths.
• Test cases derived to exercise the basis set are guaranteed to execute every statement in the program at least one time during testing.
Basis path testing is a white-box testing method that defines test cases based on the flows, or logical paths, that can be taken through the program. In software engineering, basis path testing involves execution of all possible blocks in a program and achieves maximum path coverage with the least number of test cases.
Steps for basis path testing
The basic steps involved in basis path testing include:
● Draw a control flow graph (to determine the different program paths)
● Calculate cyclomatic complexity (a metric to determine the number of independent paths)
● Find a basis set of paths
● Generate test cases to exercise each path
Cyclomatic Complexity
Cyclomatic complexity is a software metric used to measure the complexity of a program. This metric measures the number of independent paths through the program's source code. An independent path is defined as a path that has at least one edge which has not been traversed before in any other path. Cyclomatic complexity can be calculated with respect to functions, modules, methods or classes within a program. In the flow graph, nodes represent processing tasks while edges represent control flow between the nodes.
Flow graph notation for a program
Flow graph notation for a program defines several nodes connected through edges. Flow diagrams can be drawn for statements such as if-else, while, until and the normal sequence of flow.
How to Calculate Cyclomatic Complexity
Mathematically, the cyclomatic complexity is the size of a set of independent paths through the graph. It can be computed as
V(G) = E - N + 2
where E is the number of edges and N is the number of nodes, or as
V(G) = P + 1
where P is the number of predicate nodes (nodes that contain a condition).
Properties of cyclomatic complexity
1. V(G) is the maximum number of independent paths in the graph.
2. V(G) >= 1.
3. G will have one path if V(G) = 1.
4. As a rule of thumb, complexity should be kept at or below 10.
How is this metric useful for software testing?
Basis path testing is a white-box technique, and it guarantees that every statement is executed at least once during testing. It checks each linearly independent path through the program, which means the number of test cases will be equal to the cyclomatic complexity of the program.
Uses of cyclomatic complexity
Cyclomatic complexity can prove to be very helpful in the following ways:
● Helps developers and testers to determine independent path executions
● Developers can ensure that all the paths have been tested at least once
● Helps us to focus more on the uncovered paths
● Improves code coverage in software engineering
● Evaluates the risk associated with the application or program
● Using these metrics early in the cycle reduces the risk in the program
In this example, there are a few conditional statements that are executed depending on which conditions are satisfied. Here there are three paths or conditions that need to be tested to get the output:
● Path 1: 1, 2, 3, 5, 6, 7
● Path 2: 1, 2, 4, 5, 6, 7
● Path 3: 1, 6, 7
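The original figure for this example is not reproduced here, so the sketch below uses a small invented function whose flow graph has the same three paths; the node numbers in the comments map onto the paths listed above.

    def classify(x):                      # node 1: entry and outer decision
        if x is not None:                 # node 1 -> 2 (or 1 -> 6 when x is None)
            if x >= 0:                    # node 2: inner decision
                label = "non-negative"    # node 3
            else:
                label = "negative"        # node 4
            result = label.upper()        # node 5
        else:
            result = "UNKNOWN"            # taken on path 1, 6, 7
        return result                     # nodes 6 and 7: join and exit

    # One test case per path:
    assert classify(5) == "NON-NEGATIVE"   # path 1, 2, 3, 5, 6, 7
    assert classify(-2) == "NEGATIVE"      # path 1, 2, 4, 5, 6, 7
    assert classify(None) == "UNKNOWN"     # path 1, 6, 7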
• To illustrate the use of a flow graph, consider the procedural design representation in Fig. 7.2(a).
Fig. 7.2: (a) Flowchart, (b) Flow graph
• When stated in terms of a flow graph, an independent path must move along at least one edge that has not been traversed before the path is defined. For example, a set of independent paths for the flow graph illustrated in Fig. 7.2 is:
path 1: 1-11
path 2: 1-2-3-4-5-10-1-11
path 3: 1-2-3-6-7-9-10-1-11
path 4: 1-2-3-6-8-9-10-1-11
• Note that each new path introduces a new edge. The path 1-2-3-4-5-10-1-2-3-6-8-9-10-1-11 is not considered to be an independent path because it is simply a combination of already specified paths and does not traverse any new edges.
• Paths 1, 2, 3, and 4 constitute a basis set for the flow graph in Fig. 7.2.
• That is, if tests can be designed to force execution of these paths (a basis set), every statement in the program will have been guaranteed to be executed at least one time, and every condition will have been executed on its true and false sides.
• It should be noted that the basis set is not unique. In fact, a number of different basis sets can be derived for a given procedural design.
• Cyclomatic complexity is a software metric that provides a quantitative measure of the logical complexity of a program.
• When used in the context of the basis path testing method, the value computed for cyclomatic complexity defines the number of independent paths in the basis set of a program and provides us with an upper bound for the number of tests that must be conducted to ensure that all statements have been executed at least once.
• Cyclomatic complexity has a foundation in graph theory and is computed in one of three ways:
o The number of regions of the flow graph corresponds to the cyclomatic complexity.
o Cyclomatic complexity, V(G), for a flow graph G is defined as V(G) = E - N + 2, where E is the number of flow graph edges and N is the number of flow graph nodes.
o Cyclomatic complexity, V(G), for a flow graph G is defined as V(G) = P + 1, where P is the number of predicate nodes contained in the flow graph G.
• Referring once more to the flow graph in Fig. 7.2, the cyclomatic complexity can be computed using each of the algorithms just noted:
1) The flow graph has four regions.
2) V(G) = 11 edges - 9 nodes + 2 = 4.
3) V(G) = 3 predicate nodes + 1 = 4.
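The two formulas can be checked mechanically. The helpers below are a small sketch that plugs in the edge, node and predicate counts quoted above for the Fig. 7.2 flow graph and confirms that both formulas agree with the four-region count.

    def cyclomatic_complexity(edges, nodes):
        """V(G) = E - N + 2 for a connected flow graph."""
        return edges - nodes + 2

    def cyclomatic_from_predicates(predicate_nodes):
        """V(G) = P + 1."""
        return predicate_nodes + 1

    # Counts taken from the Fig. 7.2 example: 11 edges, 9 nodes, 3 predicate nodes, 4 regions.
    assert cyclomatic_complexity(11, 9) == 4
    assert cyclomatic_from_predicates(3) == 4
    print("V(G) =", cyclomatic_complexity(11, 9))   # upper bound on basis-path test cases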
Deriving Test Cases
• The basis path testing method can be applied to a procedural design or to source code.
• In this section, we present basis path testing as a series of steps.
• The following steps can be applied to derive the basis set:
o Using the design or code as a foundation, draw a corresponding flow graph.
o Determine the cyclomatic complexity of the resultant flow graph.
o Determine a basis set of linearly independent paths.
o Prepare test cases that will force execution of each path in the basis set.
• It is important to note that some independent paths cannot be tested in stand-alone fashion.
• That is, the combination of data required to traverse the path cannot be achieved in the normal flow of the program. In such cases, these paths are tested as part of another path test.
Graph Matrices
• A graph matrix is a tool that assists in basis path testing. A graph matrix is a square matrix whose size (i.e., number of rows and columns) is equal to the number of nodes in the flow graph.
• Each row and column corresponds to an identified node, and matrix entries correspond to connections (edges) between nodes. A simple example of a flow graph and its corresponding graph matrix is shown in Fig. 7.4.
• Referring to Fig. 7.4, each node in the flow graph is identified by a number, while each edge is identified by a letter.
• A letter entry is made in the matrix to correspond to a connection between two nodes. For example, node 3 is connected to node 4 by edge b.
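Since Fig. 7.4 is not reproduced here, the sketch below builds a graph matrix for a small invented flow graph; row and column indices are node numbers and each non-empty entry is the letter of the connecting edge, so matrix[3][4] == "b" means node 3 is connected to node 4 by edge b.

    # Edges of a small hypothetical flow graph: (from_node, to_node, edge_letter)
    edges = [(1, 2, "a"), (2, 3, "c"), (3, 4, "b"), (4, 1, "d")]

    nodes = sorted({n for frm, to, _ in edges for n in (frm, to)})

    # Square graph matrix: one row and one column per node, "" where there is no edge.
    matrix = {frm: {to: "" for to in nodes} for frm in nodes}
    for frm, to, letter in edges:
        matrix[frm][to] = letter

    assert matrix[3][4] == "b"        # node 3 connected to node 4 by edge b
    for frm in nodes:                 # print the matrix row by row
        print(frm, [matrix[frm][to] or "." for to in nodes])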
Control Structure Testing
Control structure testing is a group of white-box testing methods:
• Branch testing
• Condition testing
• Data flow testing
• Loop testing
1) Branch testing: for every decision, each branch needs to be executed at least once; this is also called decision testing. Its shortcoming is that it ignores implicit paths that result from compound conditionals: it treats a compound conditional as a single statement. (We count each branch taken out of the decision, regardless of which condition led to the branch.)
This example has two branches to be executed:
IF (a equals b) THEN
    statement 1
ELSE
    statement 2
END IF
2) Condition testing: condition testing is a test construction method that focuses on exercising the logical conditions in a program module. Definition: "For a compound condition C, the true and false branches of C and every simple condition in C need to be executed at least once." Multiple-condition testing requires that all true-false combinations of simple conditions be exercised at least once. Therefore, all statements, branches, and conditions are necessarily covered.
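A minimal sketch of multiple-condition testing for a compound condition with two simple conditions follows; the function and threshold values are invented for illustration. The four parametrized cases exercise every true/false combination of the two simple conditions, which also covers both branches of the compound condition.

    import pytest

    def needs_review(amount, is_new_customer):
        # Compound condition C made of two simple conditions.
        if amount > 1000 and is_new_customer:
            return True
        return False

    @pytest.mark.parametrize("amount, is_new, expected", [
        (1500, True,  True),    # condition1=True,  condition2=True
        (1500, False, False),   # condition1=True,  condition2=False
        (500,  True,  False),   # condition1=False, condition2=True
        (500,  False, False),   # condition1=False, condition2=False
    ])
    def test_all_condition_combinations(amount, is_new, expected):
        assert needs_review(amount, is_new) == expected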
3) Data flow testing: selects test paths according to the locations of definitions and uses of variables. This is a somewhat sophisticated technique and is not practical for extensive use. Its use should be targeted to modules with nested if and loop statements.
4) Loop testing: loops are fundamental to many algorithms and need thorough testing. There are four different classes of loops: simple, concatenated, nested, and unstructured.
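For a simple loop, a common guideline is to test it with zero passes, one pass, a typical number of passes, and at (and just beyond) its maximum number of passes. The sketch below applies that idea to an invented function that sums at most max_items values.

    def sum_first(values, max_items=5):
        """Sum at most max_items entries of values (a simple bounded loop)."""
        total = 0
        for i, v in enumerate(values):
            if i >= max_items:
                break
            total += v
        return total

    # Zero, one, two, typical, and maximum-boundary passes through the loop.
    assert sum_first([]) == 0                       # loop skipped entirely
    assert sum_first([7]) == 7                      # one pass
    assert sum_first([1, 2]) == 3                   # two passes
    assert sum_first([1, 2, 3]) == 6                # typical number of passes
    assert sum_first([1, 1, 1, 1, 1]) == 5          # exactly max_items passes
    assert sum_first([1, 1, 1, 1, 1, 99]) == 5      # more than max_items: extra item ignored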
Black-Box Testing
• Black-box testing, also called behavioural testing, focuses on the functional requirements of the software.
• That is, black-box testing enables the software engineer to derive sets of input conditions that will fully exercise all functional requirements for a program.
• Black-box testing is not an alternative to white-box techniques. Rather, it is a complementary approach that is likely to uncover a different class of errors than white-box methods.
• Black-box testing is a testing technique in which the functionality of the Application Under Test (AUT) is tested without looking at the internal code structure, implementation details or knowledge of internal paths of the software. This type of testing is based entirely on software requirements and specifications.
• In black-box testing we focus only on the inputs and outputs of the software system, without bothering about internal knowledge of the software program.
• For example, consider an operating system like Windows, a website like Google, a database like Oracle or even your own custom application. Under black-box testing, you can test these applications by just focusing on the inputs and outputs, without knowing their internal code implementation.
• Black-box testing attempts to find errors in the following categories:
o Incorrect or missing functions,
o Interface errors,
o Errors in data structures or external database access,
o Behaviour or performance errors, and
o Initialization and termination errors.
Difference between Black Box and White Box Testing
• Different black-box testing methods are:
o Graph-based testing
o Equivalence partitioning
o Boundary value analysis
o Orthogonal array testing
• Unlike white-box testing, which is performed early in the testing process, black-box testing tends to be applied during later stages of testing.
Graph-Based Testing Method
• Software testing begins by creating a graph of important objects and their relationships and then devising a series of tests that will cover the graph so that each object and relationship is exercised and errors are uncovered.
• To accomplish these steps, the software engineer begins by creating a graph, i.e. a collection of nodes that represent objects, links that represent the relationships between objects, node weights that describe the properties of a node, and link weights that describe some characteristics of a link.
o This technique of black-box testing involves drawing a graph that depicts the links between the causes (inputs) and the effects (outputs) that the inputs trigger. This testing utilizes different combinations of inputs and outputs. It is a helpful technique for understanding the software's functional behaviour, as it visualizes the flow of inputs and outputs in a lively fashion.
o If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined. If an input condition is Boolean, one valid and one invalid class are defined.
• By applying these guidelines for the derivation of equivalence classes, test cases for each input domain data object can be developed and executed.
Boundary Value Analysis
• Boundary value testing is focused on the values at boundaries. This technique determines whether a certain range of values is acceptable to the system or not. It is very useful in reducing the number of test cases. It is most suitable for systems where an input lies within certain ranges.
• A greater number of errors occurs at the boundaries of the input domain than in the "center".
• It is for this reason that boundary value analysis (BVA) has been developed as a testing technique. BVA leads to a selection of test cases that exercise bounding values.
• Boundary value analysis is a test case design technique that complements equivalence partitioning.
• Rather than selecting any element of an equivalence class, BVA leads to the selection of test cases at the "edges" of the class.
• Rather than focusing solely on input conditions, BVA derives test cases from the output domain as well.
• Guidelines for BVA are:
o If an input condition specifies a range bounded by values a and b, test cases should be designed with values a and b and with values just above and just below a and b.
o If an input condition specifies a number of values, test cases should be developed that exercise the minimum and maximum numbers. Values just above and below the minimum and maximum are also tested.
o Apply the first two guidelines to output conditions. For example, assume that a temperature vs. pressure table is required as output from an engineering analysis program.
o Test cases should be designed to create an output report that produces the maximum (and minimum) allowable number of table entries.
o If internal program data structures have prescribed boundaries (e.g. an array has a defined limit of 100 entries), be certain to design a test case to exercise the data structure at its boundary.
o Most software engineers intuitively perform BVA to some degree. By applying these guidelines, boundary testing will be more complete, thereby having a higher likelihood of error detection.
Equivalence Partitioning Testing
Equivalence class partitioning is a black-box technique (the code is not visible to the tester) which can be applied to all levels of testing: unit, integration, system, etc. In this technique, you divide the set of test conditions into partitions that can be considered the same.
● It divides the input data of the software into different equivalence data classes.
● You can apply this technique where there is a range in the input field.
Example 1: Equivalence and boundary values (see the sketch after this list)
● Consider the behaviour of an "Order Pizza" text box.
● Pizza values 1 to 10 are considered valid; a success message is shown.
● Values 11 to 99 are considered invalid for the order and an error message will appear: "Only 10 Pizza can be ordered".
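Using the pizza order example above, one test value can be drawn from each equivalence class and extra cases added at the boundaries (0, 1, 10, 11). The validate_pizza_order function is a hypothetical stand-in for the text box's validation logic.

    import pytest

    def validate_pizza_order(quantity):
        """Hypothetical validation behind the 'Order Pizza' text box."""
        if 1 <= quantity <= 10:
            return "success"
        return "Only 10 Pizza can be ordered"

    @pytest.mark.parametrize("quantity, expected", [
        (5,  "success"),                        # representative of the valid class 1-10
        (50, "Only 10 Pizza can be ordered"),   # representative of the invalid class 11-99
        (0,  "Only 10 Pizza can be ordered"),   # boundary: just below the valid range
        (1,  "success"),                        # boundary: lower edge of the valid range
        (10, "success"),                        # boundary: upper edge of the valid range
        (11, "Only 10 Pizza can be ordered"),   # boundary: just above the valid range
    ])
    def test_order_quantity_partitions_and_boundaries(quantity, expected):
        assert validate_pizza_order(quantity) == expected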
Orthogonal Array Testing
Orthogonal array testing (OAT) is a software testing strategy that uses orthogonal arrays, especially when the system to be tested has huge data inputs. For example, when a train ticket has to be verified, factors such as the number of passengers, the ticket number, the seat numbers and the train numbers have to be tested, which becomes difficult when a tester verifies the inputs one by one. It is more efficient to combine several inputs together and test them at once; here, the orthogonal array testing method can be used. This type of pairing or combining of inputs and testing the system to save time is called pairwise testing, and the OAT technique is used for pairwise testing.
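The sketch below uses the standard L4(2^3) orthogonal array for three two-level factors; the factor names and levels are invented for illustration. Four runs cover every pairwise combination of levels, instead of the 2 x 2 x 2 = 8 runs an exhaustive test would need.

    # L4(2^3) orthogonal array: each row is one test run (levels are 0 or 1 per factor).
    L4 = [
        (0, 0, 0),
        (0, 1, 1),
        (1, 0, 1),
        (1, 1, 0),
    ]

    # Hypothetical two-level factors for a booking form.
    factors = {
        "payment":  ["card", "cash"],
        "delivery": ["pickup", "home"],
        "size":     ["small", "large"],
    }

    names = list(factors)
    runs = [{name: factors[name][level] for name, level in zip(names, row)} for row in L4]
    for run in runs:
        print(run)   # each run would be executed as one test case

    # Sanity check: every pair of factors sees all four level combinations.
    from itertools import combinations
    for a, b in combinations(range(3), 2):
        assert {(row[a], row[b]) for row in L4} == {(0, 0), (0, 1), (1, 0), (1, 1)}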
OAT Advantages
● Guarantees testing of the pairwise combinations of all the selected variables.
● Reduces the number of test cases.
● Creates fewer test cases which still cover the testing of all combinations of all variables.
● A complex combination of the variables can be handled.
● It is simpler to generate and less error-prone than test sets created by hand.
● It is useful for integration testing.
● It improves productivity due to reduced test cycles and testing times.
OAT Disadvantages
● As the data inputs increase, the complexity of the test cases increases. As a result, the manual effort and time spent increase, so testers have to go for automation testing.
Risk Management in Software Engineering
What is risk?
Risk is the uncertainty associated with a future event which may or may not occur, and a corresponding potential for loss. Very simply, a risk is a potential problem. It is an activity or event that may compromise the success of a software development project. Risk is the possibility of suffering loss, and the total risk exposure of a specific project accounts for both the probability and the size of the potential loss.
Types of Risk
The various categories of risks associated with software project management are enumerated below:
1. Schedule / time-related / delivery-related planning risks
2. Budget / financial risks
3. Operational / procedural risks
4. Technical / functional / performance risks
5. Project risk
6. Other unavoidable risks
Schedule / time-related / delivery-related planning risks: these risks are related to running behind schedule and are essentially time-related risks, which directly impact the delivery of the project.
Budget / financial risks: these are the monetary risks associated with budget overruns. Some of the reasons for such risks are
● improper budget estimation
● cost overruns due to underutilization of resources
● expansion of project scope
Operational / procedural risks: these are risks associated with the day-to-day operational activities of the project.
Technical / functional / performance risks
These are technical risks associated with the functionality of the software or with the software's performance. In order to compensate for excessive budget overruns and schedule overruns, companies sometimes reduce the functionality of the software.
Other unavoidable risks
All the risks described above are those which can be anticipated to a certain extent and planned for in advance. However, there are certain risks which are unavoidable in nature. Although these risks are broadly unavoidable, an organization may anticipate and thereby reduce the impact of such risks by
● keeping abreast of changes in government policy
● monitoring the competition
● catering to the needs of the customer and ensuring customer satisfaction
What Is Risk Management in Software Engineering?
Risk management means risk containment and mitigation. First, you have to identify and plan. Then be ready to act when a risk arises, drawing upon the experience and knowledge of the entire team to minimize the impact on the project.
Risk management includes the following tasks:
● Identify risks and their triggers
● Classify and prioritize all risks
● Craft a plan that links each risk to a mitigation
● Monitor for risk triggers during the project
● Implement the mitigating action if any risk materializes
● Communicate risk status throughout the project
Risk Projection
Risk projection, also called risk estimation, attempts to rate each risk in two ways: the likelihood or probability that the risk is real, and the consequences of the problems associated with the risk, should it occur. The project planner, along with other managers and technical staff, performs four risk projection activities:
(1) Establish a scale that reflects the perceived likelihood of a risk.
(2) Delineate the consequences of the risk.
(3) Estimate the impact of the risk on the project and the product.
(4) Note the overall accuracy of the risk projection so that there will be no misunderstandings.
Developing a Risk Table
A risk table provides a project manager with a simple technique for risk projection.
Steps in setting up a risk table:
(1) The project team begins by listing all risks in the first column of the table. This is accomplished with the help of risk item checklists.
(2) Each risk is categorized in the second column (e.g. PS implies a project size risk, BU implies a business risk).
(3) The probability of occurrence of each risk is entered in the next column of the table. The probability value for each risk can be estimated by team members individually.
(4) Individual team members are polled in round-robin fashion until their assessment of risk probability begins to converge.
Assessing Risk Impact
Nature of the risk: the problems that are likely if it occurs. For example, a poorly defined external interface to customer hardware (a technical risk) will preclude early design and testing and will likely lead to system integration problems late in a project.
Scope of a risk: combines the severity with its overall distribution (how much of the project will be affected, or how many customers are harmed?).
Timing of a risk: when and how long the impact will be felt.
The overall risk exposure, RE, is determined using
RE = P x C
where P is the probability of occurrence for a risk and C is the cost to the project should the risk occur.
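As a small worked example (the probabilities and costs below are invented), the helper computes RE = P x C for a few rows of a risk table and sorts them so the highest exposures are managed first.

    def risk_exposure(probability, cost):
        """RE = P x C."""
        return probability * cost

    # Hypothetical rows of a risk table: (risk, category, P, C in currency units)
    risk_table = [
        ("Key staff leave mid-project",        "ST", 0.30, 40_000),
        ("Reusable components fail to fit",    "TE", 0.60, 25_000),
        ("Customer changes core requirements", "BU", 0.20, 60_000),
    ]

    ranked = sorted(risk_table, key=lambda row: risk_exposure(row[2], row[3]), reverse=True)
    for name, category, p, c in ranked:
        print(f"{name} [{category}]  RE = {p} x {c} = {risk_exposure(p, c):,.0f}")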