SOFTWARE TESTING
Chapter 10: SQA by Nina Godbole
Chapters 17, 18, 19, 20: Software Engineering by R.S. Pressman
Chapters 1, 3, …more from Software Testing and QA by Naik & Tripathy
V & V
Verification refers to the set of tasks that
ensure that software correctly implements a
specific function.
Validation refers to a different set of tasks
that ensure that the software that has been
built is traceable to customer requirements.
Boehm [Boe81] states this another way:
 Verification: "Are we building the product right?"
 Validation: "Are we building the right product?"
Static vs. Dynamic
OBSERVATIONS ABOUT
TESTING
“Testing is the process of executing a
program with the intention of finding
errors.” – Myers
“Testing can show the presence of bugs
but never their absence.” - Dijkstra
TESTING
Testing is the process of exercising a
program with the specific intent of
finding errors prior to delivery to the
end user (IEEE).
PURPOSE OF TESTING
It does work
It does not work
Reduce the risk of failures
Reduce the cost of testing
GOOD TESTING PRACTICES
A good test case is one that has a high
probability of detecting an undiscovered
defect, not one that shows that the
program works correctly
It is impossible to test your own
program
A necessary part of every test case is a
description of the expected result
GOOD TESTING PRACTICES
(CONT’D)
Avoid nonreproducible or on-the-fly
testing
Write test cases for valid as well as
invalid input conditions.
Thoroughly inspect the results of each
test
As the number of detected defects in a
piece of software increases, the
probability of the existence of more
undetected defects also increases
GOOD TESTING PRACTICES
(CONT’D)
Assign your best people to testing
Ensure that testability is a key objective
in your software design
Never alter the program to make testing
easier
Testing, like almost every other activity,
must start with objectives
WHAT TESTING
SHOWS
errors
requirements conformance
performance
an indication of quality
TESTING LIFE CYCLE (FROM NINA GODBOLE)
1. Decision to enter Test phase
2. Testing process Introduction
3. Test plan preparation
4. Test case development
5. Execution and management of tests
6. Process evaluation and improvement
ROLES AND
RESPONSIBILITIES
TERMINOLOGY (IEEE 829)
3.1 design level: The design
decomposition of the software item
(e.g., system, subsystem, program, or
module).
3.2 pass/fail criteria: Decision rules
used to determine whether a software
item or a software feature passes or fails
a test.
3.3 software feature: A distinguishing
characteristic of a software item (e.g.,
performance, portability, or functionality).
TERMINOLOGY
3.5 test:
(A) A set of one or more test cases, or
(B) A set of one or more test procedures, or
(C) A set of one or more test cases and procedures.
TERMINOLOGY
3.6 test case specification:
A document specifying inputs, predicted
results, and a set of execution
conditions for a test item.
3.7 test design specification:
A document specifying the details of the
test approach for a software feature
or combination of software features and
identifying the associated tests.
TERMINOLOGY
3.8 test incident report:
A document reporting on any event that
occurs during the testing process which
requires investigation.
3.9 testing: The process of analyzing a
software item to detect the differences
between existing and required
conditions (that is, bugs) and to evaluate
the features of the software item.
TERMINOLOGY
3.10 test item:
A software item which is an object of
testing.
3.11 test item transmittal report:
A document identifying test items. It
contains current status and location
information.
TERMINOLOGY
3.12 test log:
A chronological record of relevant
details about the execution of tests.
3.13 test plan:
A document describing the scope,
approach, resources, and schedule of
intended testing
activities. It identifies test items, the
features to be tested, the testing tasks,
who will do each task, and any risks requiring contingency planning.
TERMINOLOGY
3.14 test procedure specification:
A document specifying a sequence of
actions for the execution of a test.
3.15 test summary report:
A document summarizing testing
activities and results. It also contains an
evaluation
of the corresponding test items.
WHAT IS A TEST CASE?
A test case is a simple pair of <input, expected outcome>.
State-less systems: A compiler is a stateless system
 Test cases are very simple
 Outcome depends solely on the current input
State-oriented: ATM is a state oriented system
 Test cases are not that simple. A test case may consist of a sequence of <input, expected outcome> pairs
 The outcome depends both on the current state of the system and the current input
 ATM example:
 < check balance, $500.00 >,
 < withdraw, “amount?” >,
 < $200.00, “$200.00” >,
 < check balance, $300.00 >
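A minimal sketch (not part of the slides) of how the ATM sequence above could be driven in Java. The Atm class shown here is a simplified stand-in written only so the example is self-contained; a real ATM would also handle prompts such as "amount?".

// Simplified stand-in for the state-oriented system under test.
class Atm {
    private double balance;
    Atm(double openingBalance) { balance = openingBalance; }
    double checkBalance() { return balance; }
    double withdraw(double amount) { balance -= amount; return amount; } // amount dispensed
}

// Driver that executes the test case as an ordered sequence of
// <input, expected outcome> pairs; the outcome of each step depends on
// the current state (the balance) as well as on the input.
public class AtmTestCase {
    public static void main(String[] args) {
        Atm atm = new Atm(500.00);                              // assumed starting balance
        check("check balance", atm.checkBalance(), 500.00);
        check("withdraw 200.00", atm.withdraw(200.00), 200.00); // expected: $200.00 dispensed
        check("check balance", atm.checkBalance(), 300.00);
    }

    static void check(String input, double actual, double expected) {
        System.out.println(input + ": " + (actual == expected ? "PASS" : "FAIL (got " + actual + ")"));
    }
}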
EXPECTED OUTCOME
An outcome of program execution may include
 Value produced by the program
 State Change
 A sequence of values which must be interpreted together for the
outcome to be valid
Verify the correctness of program outputs
 Generate expected results for the test inputs
 Compare the expected results with the actual results of execution of the IUT (implementation under test)
TEST ARTIFACTS (IEEE)
Test plan
Test Specification
Test Script
Test Report
EXHAUSTIVE TESTING
(Figure: flow graph of a small program whose loop may execute up to 20 times)
There are 10^14 possible paths! If we execute one
test per millisecond, it would take 3,170 years to
test this program!!
SELECTIVE TESTING
(Figure: the same flow graph with one selected path exercised)
THE CONCEPT OF COMPLETE
TESTING
Complete or exhaustive testing means
“There are no undisclosed faults at the end of test
phase”
Complete testing is nearly impossible for
most systems
 The domain of possible inputs of a program is too large
 Valid inputs
 Invalid inputs
 The design issues may be too complex to completely
test
 It may not be possible to create all possible execution
environments of the system
TEST CASE
DESIGN
"Bugs lurk in corners
and congregate at
boundaries ..."
Boris Beizer
OBJECTIVE: to uncover errors
CRITERIA: in a complete manner
CONSTRAINT: with a minimum of effort and time
WHITE-BOX
TESTING
... our goal is to ensure that all
statements and conditions have
been executed at least once ...
WHY
COVER?
logic errors and incorrect assumptions
are inversely proportional to a path's
execution probability
we often believe that a path is not
likely to be executed; in fact, reality is
often counter intuitive
typographical errors are random; it's
likely that untested paths will contain
some
CYCLOMATIC COMPLEXITY
►Is a software metric that gives a
quantitative indication of the logical
complexity of software.
►Tells the number of independent paths
in basis path testing.
BASIS PATH TESTING
First, we compute the cyclomatic
complexity:
number of simple decisions + 1
or
number of enclosed areas + 1
or
edges - nodes + 2
In this case, V(G) = 4
(Figure: flow graph with nodes 1 through 11)
CYCLOMATIC
COMPLEXITY
A number of industry studies have indicated
that the higher the value of V(G), the higher
the probability of errors.
(Figure: number of modules versus V(G); modules in the higher V(G) range are more error prone)
BASIS PATH
TESTING
Next, we derive the
independent paths:
Since V(G) = 4,
there are four paths
Path 1: 1,2,3,6,7,8
Path 2: 1,2,3,5,7,8
Path 3: 1,2,4,7,8
Path 4: 1,2,4,7,2,4,...7,8
Finally, we derive test
cases to exercise these
paths.
(Figure: flow graph with nodes 1 through 8)
BASIS PATH TESTING
NOTES
you don't need a flow chart,
but the picture will help when
you trace program paths
count each simple logical test,
compound tests count as 2 or
more
basis path testing should be
applied to critical modules
CONTROL FLOW GRAPH
• Example code: ReturnAverage()
public static double ReturnAverage(int value[], int AS, int MIN, int MAX) {
    /* Function: ReturnAverage
       Computes the average of all those numbers in the input array in the
       positive range [MIN, MAX]. The maximum size of the array is AS. But the
       array size could be smaller than AS, in which case the end of input is
       represented by -999. */
    int i, ti, tv, sum;
    double av;
    i = 0; ti = 0; tv = 0; sum = 0;
    while (ti < AS && value[i] != -999) {
        ti++;
        if (value[i] >= MIN && value[i] <= MAX) {
            tv++;
            sum = sum + value[i];
        }
        i++;
    }
    if (tv > 0)
        av = (double) sum / tv;
    else
        av = (double) -999;
    return (av);
}
Figure 4.6: A function to compute the average of selected integers in
an array.
CONTROL FLOW GRAPH
V(G)=6
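As a cross-check against the code in Figure 4.6: ReturnAverage() contains five simple decisions, (ti < AS), (value[i] != -999), (value[i] >= MIN), (value[i] <= MAX), and (tv > 0), so V(G) = 5 + 1 = 6, which matches the flow graph.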
PATHS IN A CONTROL FLOW
GRAPH
A few paths in Figure 4.7. (Table 4.1)
 Path 1: 1-2-3(F)-10(T)-12-13
 Path 2: 1-2-3(F)-10(F)-11-13
 Path 3: 1-2-3(T)-4(T)-5-6(T)-7(T)-8-9-3(F)-10(T)-12-13
 Path 4: 1-2-3(T)-4(T)-5-6-7(T)-8-9-3(T)-4(T)-5-6(T)-7(T)-8-9-3(F)-10(T)-12-13
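As a hedged illustration, two JUnit 4 tests that exercise two of the behaviors behind the paths above: the case where the loop body is never entered, and the case of a single pass with an in-range value. It assumes JUnit 4 is on the classpath and that ReturnAverage() from Figure 4.6 is placed in a class named, for illustration only, AverageCalculator.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class ReturnAveragePathTest {

    // Loop never entered: the first element is -999, so tv stays 0
    // and the function returns -999.
    @Test
    public void loopNotEnteredReturnsMinus999() {
        double av = AverageCalculator.ReturnAverage(new int[] {-999}, 1, 1, 10);
        assertEquals(-999.0, av, 0.0001);
    }

    // One pass through the loop with a value inside [MIN, MAX],
    // then termination on -999; the average of that single value is returned.
    @Test
    public void singleInRangeValueIsAveraged() {
        double av = AverageCalculator.ReturnAverage(new int[] {5, -999}, 2, 1, 10);
        assertEquals(5.0, av, 0.0001);
    }
}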
CONTROL STRUCTURE
TESTING
►Condition testing — a test case design
method that exercises the logical
conditions contained in a program
module
►Data flow testing — selects test paths
of a program according to the locations
of definitions and uses of variables in
the program
LOOP TESTING
Simple loops
Nested loops
Concatenated loops
Unstructured loops
LOOP TESTING: SIMPLE
LOOPS
Minimum conditions—Simple Loops
1. skip the loop entirely
2. only one pass through the loop
3. two passes through the loop
4. m passes through the loop, where m < n
5. (n-1), n, and (n+1) passes through
the loop
where n is the maximum number
of allowable passes
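A small sketch (not from the slides) that turns the simple-loop guideline above into concrete iteration counts to try, given the maximum number of allowable passes n and some typical value m with 2 < m < n. The class and method names are illustrative.

import java.util.LinkedHashSet;
import java.util.Set;

public class SimpleLoopTestValues {
    // Iteration counts suggested by the guideline: skip, one pass, two passes,
    // m passes, and n-1, n, n+1 passes.
    public static Set<Integer> passCounts(int n, int m) {
        Set<Integer> counts = new LinkedHashSet<>();
        counts.add(0);
        counts.add(1);
        counts.add(2);
        counts.add(m);
        counts.add(n - 1);
        counts.add(n);
        counts.add(n + 1);
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(passCounts(20, 7)); // [0, 1, 2, 7, 19, 20, 21]
    }
}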
LOOP TESTING: NESTED
LOOPS
Start at the innermost loop. Set all outer loops to their minimum iteration parameter values.
Test the min+1, typical, max-1 and max values for the innermost loop, while holding the outer loops at their minimum values.
Move out one loop and set it up as in step 2, holding all other loops at typical values. Continue this step until the outermost loop has been tested.
Concatenated loops: if the loops are independent of one another, treat each as a simple loop; otherwise treat them as nested loops (for example, when the final loop counter value of loop 1 is used to initialize loop 2).
UNIT TESTING FRAMEWORKS-
TOOLS
JUnit
NUnit
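A minimal JUnit 4 sketch of what a unit test looks like in such a framework. The ShoppingCart class is a trivial example written here only so the sketch is self-contained; it is not part of the course material.

import static org.junit.Assert.assertEquals;
import org.junit.Before;
import org.junit.Test;

public class ShoppingCartTest {
    private ShoppingCart cart;

    @Before
    public void setUp() {
        cart = new ShoppingCart();          // fresh fixture before every test
    }

    @Test
    public void newCartIsEmpty() {
        assertEquals(0, cart.itemCount());  // expected result stated explicitly
    }

    @Test
    public void addingAnItemIncreasesTheCount() {
        cart.addItem("book");
        assertEquals(1, cart.itemCount());
    }
}

// Trivial class under test, included only for completeness.
class ShoppingCart {
    private final java.util.List<String> items = new java.util.ArrayList<>();
    void addItem(String item) { items.add(item); }
    int itemCount() { return items.size(); }
}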
BLACK-BOX TESTING
(Figure: in black-box testing, inputs are derived from the requirements and events, and only the resulting outputs are examined)
BLACK-BOX TESTING
►How is functional validity tested?
►How is system behavior and performance
tested?
►What classes of input will make good test
cases?
BLACK-BOX TESTING
►Is the system particularly sensitive to certain
input values?
►How are the boundaries of a data class
isolated?
►What data rates and data volume can the
system tolerate?
►What effect will specific combinations of data
have on system operation?
EQUIVALENCE PARTITIONING
(Figure: the input domain, made up of items such as user queries, mouse picks, output formats, prompts, FK input, and data, is divided into equivalence classes)
SAMPLE EQUIVALENCE
CLASSES
Valid data:
user supplied commands
responses to system prompts
file names
computational data
physical parameters
bounding values
initiation values
output data formatting
responses to error messages
graphical data (e.g., mouse picks)
Invalid data:
data outside bounds of the program
physically impossible data
proper value supplied in wrong place
BOUNDARY VALUE ANALYSIS
(Figure: test cases are chosen at the boundaries of the equivalence classes of the input domain, such as user queries, mouse picks, output formats, prompts, FK input, and data, and at the boundaries of the output domain)
EXAMPLE
Inputs or Outputs | Valid equivalence class | Invalid equivalence class
Enter a number | Between 1 and 99 | 0; >99; an expression that yields an invalid number; negative numbers; letters and non-numerics
Enter first letter of a name | First letter is a capital letter | First letter is a lower-case letter; not a letter
Draw a line | From 1 dot width to 4 inches long | No line; longer than 4 inches; not a line
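A hedged JUnit 4 sketch that turns the first row of the table into tests: one test per equivalence class plus the boundary values 1 and 99. The NumberField validator is assumed purely for illustration and is included only so the sketch compiles.

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class NumberFieldEquivalenceTest {
    @Test public void validMidRange()  { assertTrue(NumberField.isValid("50"));   } // valid class
    @Test public void lowerBoundary()  { assertTrue(NumberField.isValid("1"));    } // boundary value
    @Test public void upperBoundary()  { assertTrue(NumberField.isValid("99"));   } // boundary value
    @Test public void zeroRejected()   { assertFalse(NumberField.isValid("0"));   } // invalid: 0
    @Test public void tooLarge()       { assertFalse(NumberField.isValid("100")); } // invalid: >99
    @Test public void negativeNumber() { assertFalse(NumberField.isValid("-5"));  } // invalid: negative
    @Test public void nonNumeric()     { assertFalse(NumberField.isValid("ab"));  } // invalid: letters
}

// Assumed validator: accepts whole numbers between 1 and 99.
class NumberField {
    static boolean isValid(String text) {
        try {
            int n = Integer.parseInt(text.trim());
            return n >= 1 && n <= 99;
        } catch (NumberFormatException e) {
            return false;
        }
    }
}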
TEST TYPES
Functional tests
Algorithmic tests
Positive tests
Negative tests
Usability tests
Boundary tests
Startup/shutdown tests
Platform tests
Load/stress tests
TEST RESULT CATEGORIES
Pass
Fail
Acceptable
Tolerable
Intolerable
Test error
Break/show stopper
THE V MODEL FOR TESTING
Figure 1.7: Development and testing phases in the V model
STRATEGIC APPROACH
To perform effective testing, you should conduct effective
technical reviews. By doing this, many errors will be eliminated
before testing commences.
Testing begins at the component level and works "outward"
toward the integration of the entire computer-based system.
Different testing techniques are appropriate for different
software engineering approaches and at different points in
time.
Testing is conducted by the developer of the software and (for
large projects) an independent test group.
Testing and debugging are different activities, but debugging
must be accommodated in any testing strategy.
WHO TESTS THE
SOFTWARE?
developer: understands the system, but will test "gently", and is driven by "delivery"
independent tester: must learn about the system, but will attempt to break it, and is driven by quality
TESTING STRATEGY
(Figure: the testing strategy spiral; unit testing corresponds to code generation, integration testing to design modeling, validation testing to analysis modeling, and system testing to system engineering)
TESTING
STRATEGY
We begin by ‘testing-in-the-small’ and
move toward ‘testing-in-the-large’
For conventional software
 The module (component) is our initial focus
 Integration of modules follows
For OO software
 our focus when “testing in the small” changes from an
individual module (the conventional view) to an OO
class that encompasses attributes and operations and
implies communication and collaboration
STRATEGIC ISSUES
Specify product requirements in a quantifiable manner
long before testing commences.
State testing objectives explicitly.
Understand the users of the software and develop a
profile for each user category.
Develop a testing plan that emphasizes “rapid cycle
testing.”
Build “robust” software that is designed to test itself
Use effective technical reviews as a filter prior to testing
Conduct technical reviews to assess the test strategy and
test cases themselves.
Develop a continuous improvement approach for the
testing process.
UNIT TESTING
(Figure: the software engineer prepares test cases for the module to be tested and examines the results)
UNIT TESTING
Unit test cases exercise the module to be tested with respect to:
interface
local data structures
boundary conditions
independent paths
error handling paths
UNIT TEST
ENVIRONMENT
(Figure: the unit test environment; a driver invokes the module under test, stubs stand in for its subordinate modules, and test cases exercise its interface, local data structures, boundary conditions, independent paths, and error handling paths, with the results recorded)
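A minimal sketch of the driver/stub idea in Java, assuming a hypothetical PaymentModule that depends on a TaxService; none of these names come from the slides.

// Dependency of the module under test.
interface TaxService {
    double taxFor(double amount);
}

// Stub: replaces the real subordinate module with a fixed, predictable answer.
class TaxServiceStub implements TaxService {
    public double taxFor(double amount) { return 0.10 * amount; }
}

// Module under test (included only to make the sketch self-contained).
class PaymentModule {
    private final TaxService taxService;
    PaymentModule(TaxService taxService) { this.taxService = taxService; }
    double totalDue(double amount) { return amount + taxService.taxFor(amount); }
}

// Driver: builds the module with the stub, runs one test case, reports the result.
public class PaymentModuleDriver {
    public static void main(String[] args) {
        PaymentModule module = new PaymentModule(new TaxServiceStub());
        double actual = module.totalDue(100.00);
        double expected = 110.00;
        System.out.println(actual == expected ? "PASS" : "FAIL (got " + actual + ")");
    }
}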
INTEGRATION TESTING
STRATEGIES
Options:
• the “big bang” approach
• an incremental construction strategy
TOP DOWN
INTEGRATION
top module is tested with
stubs
stubs are replaced one at
a time, "depth first"
as new modules are integrated,
some subset of tests is re-run
(Figure: a module hierarchy with top module A and subordinate modules B through G, integrated top down)
BOTTOM-UP
INTEGRATION
drivers are replaced one at a
time, "depth first"
worker modules are grouped into
builds and integrated
(Figure: a module hierarchy A through G; low-level worker modules are grouped into a cluster and integrated bottom up)
SANDWICH TESTING
Top modules are
tested with stubs
Worker modules are grouped into
builds and integrated
(Figure: a module hierarchy A through G; the top modules are tested with stubs while lower-level worker modules are grouped into a cluster)
REGRESSION TESTING
Regression testing is the re-execution of some subset of tests
that have already been conducted to ensure that changes have
not propagated unintended side effects
Whenever software is corrected, some aspect of the software
configuration (the program, its documentation, or the data that
support it) is changed.
Regression testing helps to ensure that changes (due to
testing or for other reasons) do not introduce unintended
behavior or additional errors.
Regression testing may be conducted manually, by re-
executing a subset of all test cases or using automated
capture/playback tools.
SMOKE TESTING
A common approach for creating “daily builds” for
product software
Smoke testing steps:
 Software components that have been translated into code
are integrated into a “build.”
 A build includes all data files, libraries, reusable modules, and
engineered components that are required to implement one or
more product functions.
 A series of tests is designed to expose errors that will keep
the build from properly performing its function.
 The intent should be to uncover “show stopper” errors that have
the highest likelihood of throwing the software project behind
schedule.
 The build is integrated with other builds and the entire
product (in its current form) is smoke tested daily.
 The integration approach may be top down or bottom up.
OBJECT-ORIENTED TESTING
begins by evaluating the correctness
and consistency of the analysis and
design models
testing strategy changes
 the concept of the ‘unit’ broadens due to
encapsulation
 integration focuses on classes and their execution
across a ‘thread’ or in the context of a usage scenario
 validation uses conventional black box methods
test case design draws on conventional
methods, but also encompasses special features
BROADENING THE VIEW OF
“TESTING”
It can be argued that the review of OO analysis and design
models is especially useful because the same semantic
constructs (e.g., classes, attributes, operations, messages)
appear at the analysis, design, and code level. Therefore, a
problem in the definition of class attributes that is uncovered
during analysis will circumvent side effects that might occur
if the problem were not discovered until design or code (or
even the next iteration of analysis).
CLASS MODEL CONSISTENCY
Revisit the CRC model and the object-relationship model.
Inspect the description of each CRC index card to determine if a
delegated responsibility is part of the collaborator’s definition.
Invert the connection to ensure that each collaborator that is asked
for service is receiving requests from a reasonable source.
Using the inverted connections examined in the preceding step,
determine whether other classes might be required or whether
responsibilities are properly grouped among the classes.
Determine whether widely requested responsibilities might be
combined into a single responsibility.
OO TESTING STRATEGIES
Unit testing: the concept of the unit changes
 the smallest testable unit is the encapsulated class
 a single operation can no longer be tested in isolation (the conventional view of unit testing) but rather as part of a class
Integration Testing
 Thread-based testing integrates the set of classes required to respond to one input or event for the system
 Use-based testing begins the construction of the system by testing those classes (called independent classes) that use very few (if any) server classes. After the independent classes are tested, the next layer of classes, called dependent classes, that use the independent classes are tested.
 Cluster testing [McG94]: a cluster of collaborating classes (determined by examining the CRC and object-relationship models) is exercised by designing test cases that attempt to uncover errors in the collaborations.
OO TESTING STRATEGIES
Validation Testing
 details of class connections disappear
 draw upon use cases (Chapters 5 and 6) that are part of the
requirements model
 Conventional black-box testing methods (Chapter 18) can be
used to drive validation tests
OOT METHODS
Berard [Ber93] proposes the following approach:
1. Each test case should be uniquely identified and should be explicitly
associated with the class to be tested,
2. The purpose of the test should be stated,
3. A list of testing steps should be developed for each test and should
contain [BER94]:
a. a list of specified states for the object that is to be tested
b. a list of messages and operations that will be exercised as
a consequence of the test
c. a list of exceptions that may occur as the object is tested
d. a list of external conditions (i.e., changes in the environment
external to the software that must exist in order to properly
conduct the test)
e. supplementary information that will aid in understanding or
implementing the test.
TESTING METHODS
Fault-based testing
 The tester looks for plausible faults (i.e., aspects of the
implementation of the system that may result in defects). To
determine whether these faults exist, test cases are designed to
exercise the design or code.
Class Testing and the Class Hierarchy
 Inheritance does not obviate the need for thorough testing of all
derived classes. In fact, it can actually complicate the testing
process.
Scenario-Based Test Design
 Scenario-based testing concentrates on what the user does, not what
the product does. This means capturing the tasks (via use-cases)
that the user has to perform, then applying them and their variants
as tests.
OOT METHODS: RANDOM TESTING
Random testing
 identify operations applicable to a class
 define constraints on their use
 identify a minimum test sequence
 an operation sequence that defines the minimum life history of the
class (object)
 generate a variety of random (but valid) test
sequences
 exercise other (more complex) class instance life histories
OOT METHODS: PARTITION TESTING
Partition Testing
 reduces the number of test cases required to test a class in
much the same way as equivalence partitioning for
conventional software
 state-based partitioning
 categorize and test operations based on their ability to change the
state of a class
 attribute-based partitioning
 categorize and test operations based on the attributes that they use
 category-based partitioning
 categorize and test operations based on the generic function each
performs
OOT METHODS: INTER-CLASS TESTING
Inter-class testing
 For each client class, use the list of class operators to
generate a series of random test sequences. The
operators will send messages to other server classes.
 For each message that is generated, determine the
collaborator class and the corresponding operator in
the server object.
 For each operator in the server object (that has been
invoked by messages sent from the client object),
determine the messages that it transmits.
 For each of the messages, determine the next level of
operators that are invoked and incorporate these into
the test sequence
OOT METHODS: BEHAVIOR
TESTING
(Figure 14.3: State diagram for the Account class, adapted from [KIR94]. States: empty acct, set up acct, working acct, nonworking acct, dead acct. Transitions: open/setupAccnt, deposit (initial), deposit, withdraw, balance, credit, accntInfo, withdrawal (final), close.)
The tests to be designed
should achieve all state
coverage [KIR94]. That is,
the operation sequences
should cause the Account
class to make transition
through all allowable
states
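A hedged sketch of a state-coverage test for the Account class: a minimal Account implementation with the states of Figure 14.3 (written here only so the example runs; the real class also offers balance, credit, and accntInfo in the working state) and a driver whose operation sequence visits every allowable state.

class Account {
    private String state = "empty acct";
    void setupAccnt()            { if (state.equals("empty acct")) state = "set up acct"; }
    void deposit(double amount)  { if (state.equals("set up acct") || state.equals("working acct")) state = "working acct"; }
    void withdraw(double amount) { if (state.equals("working acct")) state = "nonworking acct"; } // treated as the final withdrawal
    void close()                 { if (state.equals("nonworking acct")) state = "dead acct"; }
    String getState()            { return state; }
}

public class AccountStateCoverageTest {
    public static void main(String[] args) {
        Account acct = new Account();   // empty acct
        acct.setupAccnt();              // empty acct -> set up acct
        acct.deposit(100.00);           // initial deposit -> working acct
        acct.withdraw(100.00);          // final withdrawal -> nonworking acct
        acct.close();                   // -> dead acct
        System.out.println("dead acct".equals(acct.getState())
                ? "PASS: all states visited"
                : "FAIL: ended in " + acct.getState());
    }
}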
WEB APP TESTING
TESTING QUALITY
DIMENSIONS-I
Content is evaluated at both a syntactic and
semantic level.
 syntactic level—spelling, punctuation and grammar
are assessed for text-based documents.
 semantic level—correctness (of information
presented), consistency (across the entire content
object and related objects) and lack of ambiguity
are all assessed.
Function is tested for correctness,
stability, and general conformance to
appropriate implementation standards
(e.g., Java or XML language standards).
Structure is assessed to ensure that it
 properly delivers WebApp content and function
 is extensible
TESTING QUALITY
DIMENSIONS-II
Usability is tested to ensure that each
category of user
 is supported by the interface
 can learn and apply all required navigation syntax
and semantics
Navigability is tested to ensure that
 all navigation syntax and semantics are exercised to
uncover any navigation errors (e.g., dead links,
improper links, erroneous links).
Performance is tested under a variety of
operating conditions, configurations, and
loading to ensure that
 the system is responsive to user interaction
TESTING QUALITY DIMENSIONS-
III
Compatibility is tested by executing the
WebApp in a variety of different host
configurations on both the client and server
sides.
 The intent is to find errors that are specific to a unique host
configuration.
Interoperability is tested to ensure that the
WebApp properly interfaces with other
applications and/or databases.
Security is tested by assessing potential
vulnerabilities and attempting to exploit
each.
ERRORS IN A WEBAPP
Because many types of WebApp tests uncover problems that are first evidenced on
the client side, you often see a symptom of the error, not the error itself.
Because a WebApp is implemented in a number of different configurations and
within different environments, it may be difficult or impossible to reproduce an
error outside the environment in which the error was originally encountered.
Although some errors are the result of incorrect design or improper HTML (or other
programming language) coding, many errors can be traced to the WebApp
configuration.
Because WebApps reside within a client/server architecture, errors can be difficult
to trace across three architectural layers: the client, the server, or the network itself.
Some errors are due to the static operating environment (i.e., the specific
configuration in which testing is conducted), while others are attributable to the
dynamic operating environment (i.e., instantaneous resource loading or time-related
errors).
WEBAPP TESTING STRATEGY-I
The content model for the WebApp is
reviewed to uncover errors.
The interface model is reviewed to
ensure that all use-cases can be
accommodated.
The design model for the WebApp is
reviewed to uncover navigation errors.
The user interface is tested to uncover
errors in presentation and/or navigation
mechanics.
Selected functional components are unit tested.
WEBAPP TESTING STRATEGY-II
Navigation throughout the architecture is tested.
The WebApp is implemented in a variety of different
environmental configurations and is tested for compatibility
with each configuration.
Security tests are conducted in an attempt to exploit
vulnerabilities in the WebApp or within its environment.
Performance tests are conducted.
The WebApp is tested by a controlled and monitored
population of end-users
 the results of their interaction with the system are evaluated for
content and navigation errors, usability concerns, compatibility
concerns, and WebApp reliability and performance.
THE TESTING PROCESS
(Figure: the WebApp testing process overlaid on the design pyramid, from the user down to the technology: interface design, aesthetic design, content design, navigation design, architecture design, and component design)
Content testing
Interface testing
Component testing
Navigation testing
Performance testing
Configuration testing
Security testing
CONTENT TESTING
Content testing has three important
objectives:
 to uncover syntactic errors (e.g., typos, grammar
mistakes) in text-based documents, graphical
representations, and other media
 to uncover semantic errors (i.e., errors in the accuracy
or completeness of information) in any content object
presented as navigation occurs, and
 to find errors in the organization or structure of
content that is presented to the end-user.
ASSESSING CONTENT
SEMANTICS
Is the information factually accurate?
Is the information concise and to the point?
Is the layout of the content object easy for the user to understand?
Can information embedded within a content object be found easily?
Have proper references been provided for all information derived from
other sources?
Is the information presented consistent internally and consistent with
information presented in other content objects?
Is the content offensive, misleading, or does it open the door to
litigation?
Does the content infringe on existing copyrights or trademarks?
Does the content contain internal links that supplement existing
content? Are the links correct?
Does the aesthetic style of the content conflict with the aesthetic style
of the interface?
DATABASE TESTING
(Figure: the layers of the WebApp data architecture: client layer (user interface), server layer (WebApp), server layer (data transformation), server layer (data management), and database layer (data access) over the database, with HTML scripts, user data, SQL, and raw data passing between them)
Tests are defined for each layer.
USER INTERFACE TESTING
Interface features are tested to ensure that design rules,
aesthetics, and related visual content are available for the
user without error.
Individual interface mechanisms are tested in a manner
that is analogous to unit testing.
Each interface mechanism is tested within the context of
a use-case or NSU (navigation semantic units) for a
specific user category.
The complete interface is tested against selected use-
cases and NSUs to uncover errors in the semantics of the
interface.
The interface is tested within a variety of environments
(e.g., browsers) to ensure that it will be compatible.
TESTING INTERFACE
MECHANISMS-I
Links—navigation mechanisms that link the user to some
other content object or function.
Forms—a structured document containing blank fields
that are filled in by the user. The data contained in the
fields are used as input to one or more WebApp
functions.
Client-side scripting—a list of programmed commands
in a scripting language (e.g., Javascript) that handle
information input via forms or other user interactions
Dynamic HTML—leads to content objects that are
manipulated on the client side using scripting or
cascading style sheets (CSS).
Client-side pop-up windows—small windows that pop up
without user interaction. These windows can be
content-oriented and may require some form of user interaction.
TESTING INTERFACE
MECHANISMS-II
CGI scripts—a common gateway interface (CGI) script
implements a standard method that allows a Web server to
interact dynamically with users (e.g., a WebApp that contains
forms may use a CGI script to process the data contained in the
form once it is submitted by the user).
Streaming content—rather than waiting for a request from the
client-side, content objects are downloaded automatically from
the server side. This approach is sometimes called “push”
technology because the server pushes data to the client.
Cookies—a block of data sent by the server and stored by a
browser as a consequence of a specific user interaction. The
content of the data is WebApp-specific (e.g., user identification
data or a list of items that have been selected for purchase by
the user).
Application specific interface mechanisms—include one or
more “macro” interface mechanisms such as a shopping cart,
credit card processing, or a shipping cost calculator.
USABILITY TESTS
Design by WebE team … executed by end-users
Testing sequence …
 Define a set of usability testing categories and identify goals for
each.
 Design tests that will enable each goal to be evaluated.
 Select participants who will conduct the tests.
 Instrument participants’ interaction with the WebApp while
testing is conducted.
 Develop a mechanism for assessing the usability of the WebApp.
Usability can be assessed at different levels of abstraction:
 the usability of a specific interface mechanism (e.g., a form) can
be assessed
 the usability of a complete Web page (encompassing interface
mechanisms, data objects and related functions) can be evaluated
 the usability of the complete WebApp can be considered.
COMPATIBILITY TESTING
The first step in compatibility testing is to define a set of “commonly
encountered” client-side computing configurations and their
variants
Create a tree structure identifying
 each computing platform
 typical display devices
 the operating systems supported on the platform
 the browsers available
 likely Internet connection speeds
 similar information.
Derive a series of compatibility validation tests
 derived from existing interface tests, navigation tests, performance
tests, and security tests.
 intent of these tests is to uncover errors or execution problems that
can be traced to configuration differences.
COMPONENT-LEVEL TESTING
Focuses on a set of tests that attempt to
uncover errors in WebApp functions
Conventional black-box and white-box
test case design methods can be used
Database testing is often an integral
part of the component-testing regime
NAVIGATION TESTING
The following navigation mechanisms should be tested:
 Navigation links—these mechanisms include internal links within the
WebApp, external links to other WebApps, and anchors within a
specific Web page.
 Redirects—these links come into play when a user requests a non-
existent URL or selects a link whose destination has been removed
or whose name has changed.
 Bookmarks—although bookmarks are a browser function, the
WebApp should be tested to ensure that a meaningful page title can
be extracted as the bookmark is created.
 Frames and framesets—tested for correct content, proper layout and
sizing, download performance, and browser compatibility
 Site maps—Each site map entry should be tested to ensure that the
link takes the user to the proper content or functionality.
 Internal search engines—Search engine testing validates the
accuracy and completeness of the search, the error-handling
properties of the search engine, and advanced search features
TESTING NAVIGATION
SEMANTICS-I
Is the NSU achieved in its entirety without error?
Is every navigation node (defined for a NSU) reachable within the
context of the navigation paths defined for the NSU?
If the NSU can be achieved using more than one navigation path, has
every relevant path been tested?
If guidance is provided by the user interface to assist in navigation,
are directions correct and understandable as navigation proceeds?
Is there a mechanism (other than the browser ‘back’ arrow) for
returning to the preceding navigation node and to the beginning of
the navigation path?
Do mechanisms for navigation within a large navigation node (i.e., a
long web page) work properly?
If a function is to be executed at a node and the user chooses not to
provide input, can the remainder of the NSU be completed?
TESTING NAVIGATION
SEMANTICS-II
If a function is executed at a node and an error in function
processing occurs, can the NSU be completed?
Is there a way to discontinue the navigation before all nodes
have been reached, but then return to where the navigation was
discontinued and proceed from there?
Is every node reachable from the site map? Are node names
meaningful to end-users?
If a node within an NSU is reached from some external source,
is it possible to proceed to the next node on the navigation
path? Is it possible to return to the previous node on the
navigation path?
Does the user understand his location within the content
architecture as the NSU is executed?
SELF READING
http://asktog.com/atc/principles-of-
interaction-design/
CONFIGURATION TESTING
Server-side
 Is the WebApp fully compatible with the server OS?
 Are system files, directories, and related system data created
correctly when the WebApp is operational?
 Do system security measures (e.g., firewalls or encryption) allow the
WebApp to execute and service users without interference or
performance degradation?
 Has the WebApp been tested with the distributed server configuration
(if one exists) that has been chosen?
 Is the WebApp properly integrated with database software? Is the
WebApp sensitive to different versions of database software?
 Do server-side WebApp scripts execute properly?
 Have system administrator errors been examined for their effect on
WebApp operations?
 If proxy servers are used, have differences in their configuration been
addressed with on-site testing?
CONFIGURATION TESTING
Client-side
 Hardware—CPU, memory, storage and printing devices
 Operating systems—Linux, Macintosh OS, Microsoft Windows,
a mobile-based OS
 Browser software—Internet Explorer, Mozilla/Netscape,
Opera, Safari, and others
 User interface components—Active X, Java applets and others
 Plug-ins—QuickTime, RealPlayer, and many others
 Connectivity—cable, DSL, regular modem, T1
The number of configuration variables must be reduced
to a manageable number
SECURITY TESTING
Designed to probe vulnerabilities of the
client-side environment, the network
communications that occur as data are
passed from client to server and back
again, and the server-side environment
On the client-side, vulnerabilities can often
be traced to pre-existing bugs in browsers,
e-mail programs, or communication
software.
On the server-side, vulnerabilities include
denial-of-service attacks and malicious
scripts that can be passed along to the
client-side or used to disable server operations.
PERFORMANCE TESTING
Does the server response time degrade to a point where
it is noticeable and unacceptable?
At what point (in terms of users, transactions or data
loading) does performance become unacceptable?
What system components are responsible for
performance degradation?
What is the average response time for users under a
variety of loading conditions?
Does performance degradation have an impact on
system security?
Is WebApp reliability or accuracy affected as the load on
the system grows?
What happens when loads that are greater than
maximum server capacity are applied?
LOAD TESTING
The intent is to determine how the WebApp
and its server-side environment will
respond to various loading conditions
N, the number of concurrent users
T, the number of on-line transactions per unit of time
D, the data load processed by the server per transaction
Overall throughput, P, is computed in the
following manner:
 P = N x T x D
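A small sketch (with assumed, purely illustrative numbers) of the throughput computation above.

public class LoadEstimate {
    public static void main(String[] args) {
        int n = 200;          // N: concurrent users (assumed)
        double t = 4.0;       // T: on-line transactions per unit of time (assumed, per minute)
        double d = 2.0;       // D: data load processed by the server per transaction (assumed, KB)
        double p = n * t * d; // P = N x T x D, overall throughput
        System.out.println("Overall throughput P = " + p + " KB per minute");
    }
}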
STRESS TESTING
Does the system degrade ‘gently’ or does the server shut down as
capacity is exceeded?
Does server software generate “server not available” messages? More
generally, are users aware that they cannot reach the server?
Does the server queue requests for resources and empty the queue
once capacity demands diminish?
Are transactions lost as capacity is exceeded?
Is data integrity affected as capacity is exceeded?
What values of N, T, and D force the server environment to fail? How
does failure manifest itself? Are automated notifications sent to
technical support staff at the server site?
If the system does fail, how long will it take to come back on-line?
Are certain WebApp functions (e.g., compute intensive functionality,
data streaming capabilities) discontinued as capacity reaches the 80
or 90 percent level?
HIGH ORDER
TESTING
Validation testing
 Focus is on software requirements
System testing
 Focus is on system integration
Alpha/Beta testing
 Focus is on customer usage
Recovery testing
 forces the software to fail in a variety of ways and
verifies that recovery is properly performed
HIGH ORDER
TESTING
Security testing
 verifies that protection mechanisms built into a
system will, in fact, protect it from improper
penetration
Stress testing
 executes a system in a manner that demands
resources in abnormal quantity, frequency, or volume
Performance Testing
 test the run-time performance of software within the
context of an integrated system
TESTING VS. DEBUGGING
DEBUGGING: A DIAGNOSTIC
PROCESS
THE DEBUGGING PROCESS
DEBUGGING PROCESS
The debugging process begins with the
execution of a test case.
Results are assessed and a lack of
correspondence between expected and actual
performance is encountered.
The debugging process will always have one
of two outcomes:
 1. The cause will be found and corrected, or
 2. The cause will not be found.
In the latter case, the person performing
debugging may suspect a cause, design a test
case to help validate that suspicion, and work
toward error correction in an iterative fashion.
DEBUGGING EFFORT
time required to diagnose the symptom and determine the cause
time required to correct the error and conduct regression tests
SYMPTOMS & CAUSES
symptom and cause may be
geographically separated
symptom may disappear when
another problem is fixed
cause may be due to a
combination of non-errors
cause may be due to a system
or compiler error
cause may be due to
assumptions that everyone
believes
symptom may be intermittent
CONSEQUENCES OF BUGS
(Figure: the damage caused by a bug ranges from mild, annoying, and disturbing through serious, extreme, and catastrophic to infectious, depending on the bug type)
Bug categories: function-related bugs, system-related bugs, data bugs, coding bugs, design bugs, documentation bugs, standards violations, etc.
DEBUGGING
TECHNIQUES
brute force / testing
backtracking
induction
deduction
CORRECTING THE ERROR
Is the cause of the bug reproduced in another part of the program? In
many situations, a program defect is caused by an erroneous pattern
of logic that may be reproduced elsewhere.
What "next bug" might be introduced by the fix I'm about to make?
Before the correction is made, the source code (or, better, the design)
should be evaluated to assess coupling of logic and data structures.
What could we have done to prevent this bug in the first place? This
question is the first step toward establishing a statistical software
quality assurance approach. If you correct the process as well as the
product, the bug will be removed from the current program and may
be eliminated from all future programs.
FINAL THOUGHTS
Think -- before you act to correct
Use tools to gain additional insight
If you’re at an impasse, get help from
someone else
Once you correct the bug, use
regression testing to uncover any side
effects
PARTS OF A TEST CASE (1)
1. Test Case No.
2. Test case Description
3. Pre-Requisites
4. Extra Instructions
5. Dependencies
6. Detailed Description
7. Steps / Actions
PARTS OF A TEST CASE (2)
8. Expected Results
9. Actual Results
10. Test case Status
test-case-id: Test Case Title
Purpose:
 A short sentence or two about the aspect of the system that is being tested. If this gets too long, break the test case up or put more information into the feature descriptions.
Prereq: Assumptions that must be met before the test case can be run. E.g., “logged in”, “guest login allowed”, “user testuser exists”.
Test Data: List of variables and their possible values used in the test case. You can list specific values or describe value ranges. The test case should be performed once for each combination of values. These values are written in set notation, one per line. E.g.:
loginID = {Valid loginID, invalid loginID, valid email, invalid email, empty}
password = {valid, invalid, empty}
Steps: Steps to carry out the test. See the step formatting rules below.
 visit LoginPage
 enter userID
 enter password
 click login
 see the terms of use page
 click agree radio button at page bottom
 click submit button
 see PersonalPage
 verify that welcome message is correct username
Format of test steps
Each step can be written very briefly using the following keywords:
login [as ROLE-OR-USER]
 Log into the system with a given user or a user of the given type. Usually only stated
explicitly when the test case depends on the permissions of a particular role or involves a
workflow between different users.
visit LOCATION
 Visit a page or screen. For web applications, LOCATION may be a hyperlink. The location should be a well-known starting point (e.g., the Login screen); drilling down to specific pages should be part of the test.
enter FIELD-NAME [as VALUE] [in SCREEN-LOCATION]
 Fill in a named form field. VALUE can be a literal value or the name of a variable defined in
the "Test Data" section. The FIELD-NAME itself can be a variable name when the UI field for
that value is clear from context, e.g., "enter password".
enter FIELDS
 Fill in all fields in a form when their values are clear from context or when their specific
values are not important in this test case.
click "LINK-LABEL" [in SCREEN-LOCATION]
 Follow a labeled link or press a button. The screen location can be a predefined panel name
or English phrase. Predefined panel names are based on GUI class names, master template
names, or titles of boxes on the page.
click BUTTON-NAME [in SCREEN-LOCATION]
 Press a named button. This step should always be followed by a "see" step to check
the results.
see SCREEN-OR-PAGE
 The tester should see the named GUI screen or web page. The general correctness of
the page should be testable based on the feature description.
verify CONDITION
 The tester should see that the condition has been satisfied. This type of step usually
follows a "see" step at the end of the test case.
verify CONTENT [is VALUE]
 The tester should see the named content on the current page, the correct values
should be clear from the test data, or given explicitly. This type of step usually
follows a "see" step at the end of the test case.
perform TEST-CASE-NAME
 This is like a subroutine call. The tester should perform all the steps of the named
test case and then continue on to the next step of this test case.
Every test case must include a verify step at the end so that the expected
output is very clear. A test case can have multiple verify steps in the middle
or at the end. Having multiple verify steps can be useful if you want a
smaller number of long tests rather than a large number of short tests.
GUIDELINES FOR TEST CASE WRITING
1. Test Cases need to be simple
2. Create Test Case with End User in mind
3. Avoid test case repetition
4. Do not Assume
5. Ensure 100% Coverage
6. Test Cases must be identifiable.
7. Peer Review.
TEST CASE TEMPLATE
Form: TestCase Template
--handouts --
EXERCISE
Write test cases for
 Login Screen
 Email Sending
 Pen
 Printer
TEST PLANNING
The Test Plan – defines the scope of the
work to be performed
The Test Procedure – a container
document that holds all of the individual
tests (test scripts) that are to be
executed
The Test Report – documents what
occurred when the test scripts were run
TEST PLAN
Questions to be answered:
 How many tests are needed?
 How long will it take to develop those tests?
 How long will it take to execute those tests?
Topics to be addressed:
 Test estimation
 Test development and informal validation
 Validation readiness review and formal validation
 Test completion criteria
TEST PLANNING
Shows what you are planning to do.
Form: TestPlan Template
--handouts --
TEST REPORT
Shows what happened when you ran
your tests.
Form: TestReport Template
--handouts --
TEST PROCEDURE
Collection of test scripts
An integral part of each test script is the
expected results
The Test Procedure document should
contain an unexecuted, clean copy of
every test so that the tests may be more
easily reused
TEST REPORT
Completed copy of each test script with
evidence that it was executed (i.e., dated with
the signature of the person who ran the test)
Copy of each SPR (Software Problem Report)
showing resolution
List of open or unresolved SPRs
Identification of SPRs found in each baseline
along with total number of SPRs in each
baseline
Regression tests executed for each software
baseline
VALIDATION TEST PLAN
IEEE – STANDARD 1012-1998
1. Overview
a. Organization
b. Tasks and Schedules
c. Responsibilities
d. Tools, Techniques, Methods
2. Processes
a. Management
b. Acquisition
c. Supply
d. Development
e. Operation
f. Maintenance
VALIDATION TEST PLAN
IEEE – STANDARD 1012-1998 (CONT’D)
3. Reporting Requirements
4. Administrative Requirements
5. Documentation Requirements
6. Resource Requirements
7. Completion Criteria
User interface testing
Testing for Android apps
TOOLS
Defect Reporting and tracking
 Bugzilla
 Jira
Test automation tools (functional and UI testing)
 Selenium
 WinRunner
Load/Performance Testing
 LoadRunner
COMMON SOFTWARE
ERRORS
Check Appendix A of
“Testing Computer Software” by Kaner,
Falk and Nguyen, 2nd Edition
COMMON SOFTWARE
ERRORS
User interface errors
Boundary related errors
Calculation errors
Initial and later state errors
Control flow errors
Race conditions
Errors in handling or interpreting data
Load conditions
Source, version, and ID control
Testing errors
COMMON SOFTWARE
ERRORS
Boundary in time (Performance)
Boundary in loops
Boundary in memory
(capacity/performance)
Boundary within data structure
Hardware related (capacity/Load/stress)
Impossible number of parentheses
(expression error)
DEFECT
What is Defect?
A defect is a variance from a desired
product attribute.
DEFECT LOG
1. Defect ID number
2. Descriptive defect name and type
3. Source of defect – test case or other source
4. Defect severity
5. Defect Priority
6. Defect status (e.g., new, open, fixed, closed, reopened,
rejected)
7. Date and time tracking for either the most recent
status change, or for each change in the status.
8. Detailed description, including the steps necessary to
reproduce the defect.
9. Component or program where defect was found
(Product version)
10. Screen prints, logs, etc. that will aid the developer in the
resolution process.
11. Person assigned to research and/or correct the
defect.
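A minimal sketch of how a defect log entry with the fields above might be represented in code. The class name, field names, and sample values are illustrative assumptions, not a prescribed format.

import java.time.LocalDateTime;
import java.util.List;

public class DefectLogEntry {
    String id;                      // 1. Defect ID number
    String name;                    // 2. Descriptive defect name and type
    String source;                  // 3. Source of defect (test case or other source)
    String severity;                // 4. e.g., critical, major, average, minor, cosmetic
    String priority;                // 5. order in which users want it fixed
    String status;                  // 6. new, open, fixed, closed, reopened, rejected
    LocalDateTime lastStatusChange; // 7. date/time of the most recent status change
    String description;             // 8. detailed description and steps to reproduce
    String component;               // 9. component/program and product version
    List<String> attachments;       // 10. screen prints, logs, etc.
    String assignee;                // 11. person assigned to research/correct the defect

    public static void main(String[] args) {
        DefectLogEntry d = new DefectLogEntry();
        d.id = "DEF-001";
        d.name = "Login button unresponsive (user interface error)";
        d.source = "Test case TC-017";
        d.severity = "Major";
        d.priority = "High";
        d.status = "New";
        d.lastStatusChange = LocalDateTime.now();
        d.description = "Open login page, enter valid credentials, click Login; nothing happens.";
        d.component = "Login module, product v2.3";
        d.attachments = List.of("login_screen.png", "client.log");
        d.assignee = "Developer A";
        System.out.println(d.id + " [" + d.severity + "/" + d.priority + "] " + d.name);
    }
}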
SEVERITY VS PRIORITY
Severity
Factor that shows how bad the defect is and
the impact it has on the product
Priority
Based upon input from users regarding
which defects are most important to them
and should be fixed first.
SEVERITY LEVELS
Critical
Major / High
Average / Medium
Minor / Low
Cosmetic defects
DEFECT LIFE CYCLE
TEST ESTIMATION
Number of test cases required is based on:
 Testing all functions and features in the SRS
 Including an appropriate number of ALAC (Act Like A
Customer) tests including:
 Do it wrong
 Use wrong or illegal combination of inputs
 Don’t do enough
 Do nothing
 Do too much
 Achieving some test coverage goal
 Achieving a software reliability goal
CONSIDERATIONS IN
TEST ESTIMATION
Test Complexity – It is better to have many
small tests than a few large ones.
Different Platforms – Does testing need to be
modified for different platforms, operating
systems, etc.
Automated or Manual Tests – Will automated
tests be developed? Automated tests take
more time to create but do not require
human intervention to run.
ESTIMATING TESTS
REQUIRED
SRS Reference   Estimated Number of Tests Required   Notes
4.1.1           3                                    2 positive and 1 negative test
4.1.2           2                                    2 automated tests
4.1.3           4                                    4 manual tests
4.1.4           5                                    1 boundary condition, 2 error conditions, 2 usability tests
…
Total           165
ESTIMATED TEST
DEVELOPMENT TIME
Estimated Number of Tests: 165
Average Test Development Time: 3.5 person-hours/test
Estimated Test Development Time: 165 × 3.5 = 577.5 person-hours
ESTIMATED TEST EXECUTION
TIME
Estimated Number of Tests: 165
Average Test Execution Time: 1.5 person-hours/test
Estimated Test Execution Time: 165 × 1.5 = 247.5 person-hours
Estimated Regression Testing (50%): 247.5 × 0.5 = 123.75 person-hours
Total Estimated Test Execution Time: 247.5 + 123.75 = 371.25 person-hours
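For clarity, the tiny sketch below reproduces the arithmetic behind these estimates; the per-test averages and the 50% regression factor are the example figures used on these slides, not recommended values.

public class TestEffortEstimate {
    public static void main(String[] args) {
        int estimatedTests = 165;        // from the test estimation table
        double devHoursPerTest = 3.5;    // average development time (person-hours/test)
        double execHoursPerTest = 1.5;   // average execution time (person-hours/test)
        double regressionFactor = 0.5;   // 50% of execution effort re-run as regression tests

        double developmentHours = estimatedTests * devHoursPerTest;     // 577.5
        double executionHours = estimatedTests * execHoursPerTest;      // 247.5
        double regressionHours = executionHours * regressionFactor;     // 123.75
        double totalExecutionHours = executionHours + regressionHours;  // 371.25

        System.out.printf("Development: %.2f person-hours%n", developmentHours);
        System.out.printf("Execution:   %.2f person-hours%n", executionHours);
        System.out.printf("Regression:  %.2f person-hours%n", regressionHours);
        System.out.printf("Total exec:  %.2f person-hours%n", totalExecutionHours);
    }
}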
WHEN TO STOP TESTING?
How is the decision to stop testing
made?
 Coverage
 Limits set by management
 User acceptance
 Contractual
 Fault detection rate threshold
 Meeting a standard
REFERENCES
Sommerville, Ian, Software Engineering, 6th Edition.
Pressman, Roger, Software Engineering: A Practitioner's
Approach, 6th Edition.
Many more…
Software quality assurance,eTesting.pptx

  • 1.
    SOFTWARE TESTING Chapter 10:SQA by Nina gadbole Chapter 17,18,19,20 Software Engineering byR.S. Pressman Chapter 1,3, …more from Software testing and QA by Naik &
  • 2.
    V & V Verificationrefers to the set of tasks that ensure that software correctly implements a specific function. Validation refers to a different set of tasks that ensure that the software that has been built is traceable to customer requirements. Boehm [Boe81] states this another way:  Verification: "Are we building the product right?"  Validation: "Are we building the right product?" THESE SLIDES ARE DESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. 2
  • 3.
  • 4.
    OBSERVATIONS ABOUT TESTING “Testing isthe process of executing a program with the intention of finding errors.” – Myers “Testing can show the presence of bugs but never their absence.” - Dijkstra
  • 5.
    TESTING Testing is theprocess of exercising a program with the specific intent of finding errors prior to delivery to the end user(IEEE).
  • 6.
    PURPOSE OF TESTING Itdoes work It does not work Reduce the risk of failures Reduce the cost of testing
  • 7.
    GOOD TESTING PRACTICES Agood test case is one that has a high probability of detecting an undiscovered defect, not one that shows that the program works correctly It is impossible to test your own program A necessary part of every test case is a description of the expected result
  • 8.
    GOOD TESTING PRACTICES (CONT’D) Avoidnonreproducible or on-the-fly testing Write test cases for valid as well as invalid input conditions. Thoroughly inspect the results of each test As the number of detected defects in a piece of software increases, the probability of the existence of more undetected defects also increases
  • 9.
    GOOD TESTING PRACTICES (CONT’D) Assignyour best people to testing Ensure that testability is a key objective in your software design Never alter the program to make testing easier Testing, like almost every other activity, must start with objectives
  • 10.
    These slides aredesigned to accompany Software Engineering: A Practitioner’s Practitioner’s Approach, 7/e Approach, 7/e (McGraw-Hill 10 WHAT TESTING SHOWS errors requirements conformance performance an indication of quality
  • 11.
    TESTING LIFE CYCLE(FROM NINA) 1. Decision to enter Test phase 2. Testing process Introduction 3. Test plan preparation 4. Test case development 5. Execution and management of tests 6. Process evaluation and improvement
  • 12.
  • 13.
    TERMINOLOGY (IEEE 829) 3.1design level: The design decomposition of the software item (e.g., system, subsystem, program, or module). 3.2 pass/fail criteria: Decision rules used to determine whether a software item or a software feature passes or fails a test. 3.3 software feature: A distinguishing characteristic of a software item (e.g.,
  • 14.
    TERMINOLOGY 3.5 test: (A) Aset of one or more test cases, or (B) A set of one or more test procedures, or (C) A set of one or more test cases and procedures.
  • 15.
    TERMINOLOGY 3.6 test casespecification: A document specifying inputs, predicted results, and a set of execution conditions for a test item. 3.7 test design specification: A document specifying the details of the test approach for a software feature or combination of software features and identifying the associated tests.
  • 16.
    TERMINOLOGY 3.8 test incidentreport: A document reporting on any event that occurs during the testing process which requires investigation. 3.9 testing: The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs) and to evaluate the features of the software item.
  • 17.
    TERMINOLOGY 3.10 test item: Asoftware item which is an object of testing. 3.11 test item transmittal report: A document identifying test items. It contains current status and location information.
  • 18.
    TERMINOLOGY 3.12 test log: Achronological record of relevant details about the execution of tests. 3.13 test plan: A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any
  • 19.
    TERMINOLOGY risks requiring contingencyplanning. 3.14 test procedure specification: A document specifying a sequence of actions for the execution of a test. 3.15 test summary report: A document summarizing testing activities and results. It also contains an evaluation of the corresponding test items.
  • 20.
    20 WHAT IS ATEST CASE? Test Case is a simple pair of <input, expected outcome> State-less systems: A compiler is a stateless system  Test cases are very simple  Outcome depends solely on the current input State-oriented: ATM is a state oriented system  Test cases are not that simple. A test case may consist of a sequences of <input, expected outcome>  The outcome depends both on the current state of the system and the current input  ATM example:  < check balance, $500.00 >,  < withdraw, “amount?” >,  < $200.00, “$200.00” >,  < check balance, $300.00 >
  • 21.
    21 EXPECTED OUTCOME An outcomeof program execution may include  Value produced by the program  State Change  A sequence of values which must be interpreted together for the outcome to be valid Verify the correctness of program outputs  Generate expected results for the test inputs  Compare the expected results with the actual results of execution of the IUT (implementation under testing)
  • 22.
  • 23.
  • 24.
    EXHAUSTIVE TESTING loop <20 X There are 10 possible paths! If we execute one test per millisecond, it would take 3,170 years to test this program!! 14
  • 25.
    SELECTIVE TESTING loop <20 X Selected path
  • 26.
    26 THE CONCEPT OFCOMPLETE TESTING Complete or exhaustive testing means “There are no undisclosed faults at the end of test phase” Complete testing is near impossible for most of the system  The domain of possible inputs of a program is too large  Valid inputs  Invalid inputs  The design issues may be too complex to completely test  It may not be possible to create all possible execution environments of the system
  • 27.
    TEST CASE DESIGN "Bugs lurkin corners and congregate at boundaries ..." Boris Beizer OBJECTIVE CRITERIA CONSTRAINT to uncover errors in a complete manner with a minimum of effort and time
  • 28.
    WHITE-BOX TESTING ... our goalis to ensure that all statements and conditions have been executed at least once ...
  • 29.
    WHY COVER? logic errors andincorrect assumptions are inversely proportional to a path's execution probability we often believe that a path is not likely to be executed; in fact, reality is often counter intuitive typographical errors are random; it's likely that untested paths will contain some
  • 30.
    CYCLOMATIC COMPLEXITY ►Is asoftware metric that gives the quantitative indication of the logical complexity of software. ►Tells the number of independent paths in basis path testing.
  • 31.
    BASIS PATH TESTING First,we compute the cyclomatic complexity: number of simple decisions + 1 or number of enclosed areas + 1 In this case, V(G) = 4 1 2 3 4 6 7 9 5 1 2 3 4 6 5 9 7 8 8 10 11 Or Edges – Nodes + 2
  • 32.
    BASIS PATH TESTING First, wecompute the cyclomatic complexity: number of simple decisions + 1 or number of enclosed areas + 1 In this case, V(G) = 4
  • 33.
    CYCLOMATIC COMPLEXITY A number ofindustry studies have indicated that the higher V(G), the higher the probability or errors. V(G) modules modules in this range are more error prone
  • 34.
    BASIS PATH TESTING Next, wederive the independent paths: Since V(G) = 4, there are four paths Path 1: 1,2,3,6,7,8 Path 2: 1,2,3,5,7,8 Path 3: 1,2,4,7,8 Path 4: 1,2,4,7,2,4,...7,8 Finally, we derive test cases to exercise these paths. 1 2 3 4 5 6 7 8
  • 35.
    BASIS PATH TESTING NOTES youdon't need a flow chart, but the picture will help when you trace program paths count each simple logical test, compound tests count as 2 or more basis path testing should be applied to critical modules
  • 36.
    36 CONTROL FLOW GRAPH •Example code: ReturnAverage() public static double ReturnAverage(int value[], int AS, int MIN, int MAX){ /* Function: ReturnAverage Computes the average of all those numbers in the input array in the positive range [MIN, MAX]. The maximum size of the array is AS. But, the array size could be smaller than AS in which case the end of input is represented by -999. */ int i, ti, tv, sum; double av; i = 0; ti = 0; tv = 0; sum = 0; while (ti < AS && value[i] != -999) { ti++; if (value[i] >= MIN && value[i] <= MAX) { tv++; sum = sum + value[i]; } i++; } if (tv > 0) av = (double)sum/tv; else av = (double) -999; return (av); } Figure 4.6: A function to compute the average of selected integers in an array.
  • 37.
  • 38.
    38 PATHS IN ACONTROL FLOW GRAPH A few paths in Figure 4.7. (Table 4.1)  Path 1: 1-2-3(F)-10(T)-12-13  Path 2: 1-2-3(F)-10(F)-11-13  Path 3: 1-2-3(T)-4(T)-5-6(T)-7(T)-8-9-3(F)-10(T)-12-13  Path 4: 1-2-3(T)-4(T)-5-6-7(T)-8-9-3(T)-4(T)-5-6(T)-7(T)-8-9- 3(F)-10(T)-12- 13
  • 39.
    CONTROL STRUCTURE TESTING ►Condition testing— a test case design method that exercises the logical conditions contained in a program module ►Data flow testing — selects test paths of a program according to the locations of definitions and uses of variables in the program
  • 40.
  • 41.
    LOOP TESTING: SIMPLE LOOPS Minimumconditions—Simple Loops 1. skip the loop entirely 2. only one pass through the loop 3. two passes through the loop 4. m passes through the loop m < n 5. (n-1), n, and (n+1) passes through the loop where n is the maximum number of allowable passes
  • 42.
    LOOP TESTING: NESTED LOOPS Startat the innermost loop. Set all outer loops to their minimum iteration parameter values. Test the min+1, typical, max-1 and max for the innermost loop, while holding the outer loops at their minimum values. Move out one loop and set it up as in step 2, holding all other loops at typical values. Continue this step until the outermost loop has been tested. If the loops are independent of one another then treat each as a simple loop else* treat as nested loops endif* for example, the final loop counter value of loop 1 is used to initialize loop 2. Nested Loops Concatenated Loops
  • 43.
  • 44.
  • 45.
    BLACK-BOX TESTING ►How isfunctional validity tested? ►How is system behavior and performance tested? ►What classes of input will make good test cases?
  • 46.
    BLACK-BOX TESTING ►Is thesystem particularly sensitive to certain input values? ►How are the boundaries of a data class isolated? ►What data rates and data volume can the system tolerate? ►What effect will specific combinations of data have on system operation?
  • 47.
  • 48.
    SAMPLE EQUIVALENCE CLASSES user suppliedcommands responses to system prompts file names computational data physical parameters bounding values initiation values output data formatting responses to error messages graphical data (e.g., mouse picks) data outside bounds of the program physically impossible data proper value supplied in wrong place Valid data Invalid data
  • 49.
  • 50.
    EXAMPLE Inputs or Outsputs Valid equivalence class Invalid equivalence class Enter a number Between 1 and 99 0, >99, an expression that yields invalid number, -ve numbers, letters and non-numerics Enter first letter of a name First letter is capital letter First letter is lower case letter Not a letter Draw a line From 1 dot width to 4 inches long No line, longer than 4 inches, not a line
  • 51.
    TEST TYPES Functional tests Algorithmictests Positive tests Negative tests Usability tests Boundary tests Startup/shutdown tests Platform tests Load/stress tests
  • 52.
  • 53.
    THE V MODELFOR TESTING Figure 1.7: Development and testing phases in the V model
  • 55.
    STRATEGIC APPROACH To performeffective testing, you should conduct effective technical reviews. By doing this, many errors will be eliminated before testing commences. Testing begins at the component level and works "outward" toward the integration of the entire computer-based system. Different testing techniques are appropriate for different software engineering approaches and at different points in time. Testing is conducted by the developer of the software and (for large projects) an independent test group. Testing and debugging are different activities, but debugging must be accommodated in any testing strategy. THESE SLIDES ARE DESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. 55
  • 56.
    WHO TESTS THE SOFTWARE? THESESLIDES ARE DESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. 56 developer independent tester Understands the system but, will test "gently" and, is driven by "delivery" Must learn about the system, but, will attempt to break it and, is driven by quality
  • 57.
    TESTING STRATEGY THESE SLIDESARE DESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. 57 System engineering Analysis modeling Design modeling Code generation Unit test Integration test Validation test System test
  • 58.
    TESTING STRATEGY We begin by‘testing-in-the-small’ and move toward ‘testing-in-the-large’ For conventional software  The module (component) is our initial focus  Integration of modules follows For OO software  our focus when “testing in the small” changes from an individual module (the conventional view) to an OO class that encompasses attributes and operations and implies communication and collaboration THESE SLIDES ARE DESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. 58
  • 59.
    STRATEGIC ISSUES Specify productrequirements in a quantifiable manner long before testing commences. State testing objectives explicitly. Understand the users of the software and develop a profile for each user category. Develop a testing plan that emphasizes “rapid cycle testing.” Build “robust” software that is designed to test itself Use effective technical reviews as a filter prior to testing Conduct technical reviews to assess the test strategy and test cases themselves. Develop a continuous improvement approach for the testing process. THESE SLIDES ARE DESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. 59
  • 60.
    UNIT TESTING THESE SLIDESARE DESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. 60 module to be tested test cases results software engineer
  • 61.
    UNIT TESTING THESE SLIDESARE DESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. 61 interface local data structures boundary conditions independent paths error handling paths module to be tested test cases
  • 62.
    UNIT TEST ENVIRONMENT THESE SLIDESARE DESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. 62 Module stub stub driver RESULTS interface local data structures boundary conditions independent paths error handling paths test cases
  • 63.
    INTEGRATION TESTING STRATEGIES THESE SLIDESARE DESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. 63 Options: • the “big bang” approach • an incremental construction strategy
  • 64.
    TOP DOWN INTEGRATION THESE SLIDESARE DESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. 64 top module is tested with stubs stubs are replaced one at a time, "depth first" as new modules are integrated, some subset of tests is re-run A B C D E F G
  • 65.
    BOTTOM-UP INTEGRATION THESE SLIDES AREDESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. 65 drivers are replaced one at a time, "depth first" worker modules are grouped into builds and integrated A B C D E F G cluster
  • 66.
    SANDWICH TESTING THESE SLIDESARE DESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. 66 Top modules are tested with stubs Worker modules are grouped into builds and integrated A B C D E F G cluster
  • 67.
    REGRESSION TESTING Regression testingis the re-execution of some subset of tests that have already been conducted to ensure that changes have not propagated unintended side effects Whenever software is corrected, some aspect of the software configuration (the program, its documentation, or the data that support it) is changed. Regression testing helps to ensure that changes (due to testing or for other reasons) do not introduce unintended behavior or additional errors. Regression testing may be conducted manually, by re- executing a subset of all test cases or using automated capture/playback tools. THESE SLIDES ARE DESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. 67
  • 68.
    SMOKE TESTING A commonapproach for creating “daily builds” for product software Smoke testing steps:  Software components that have been translated into code are integrated into a “build.”  A build includes all data files, libraries, reusable modules, and engineered components that are required to implement one or more product functions.  A series of tests is designed to expose errors that will keep the build from properly performing its function.  The intent should be to uncover “show stopper” errors that have the highest likelihood of throwing the software project behind schedule.  The build is integrated with other builds and the entire product (in its current form) is smoke tested daily.  The integration approach may be top down or bottom up. THESE SLIDES ARE DESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. 68
  • 69.
    OBJECT-ORIENTED TESTING begins byevaluating the correctness and consistency of the analysis and design models testing strategy changes  the concept of the ‘unit’ broadens due to encapsulation  integration focuses on classes and their execution across a ‘thread’ or in the context of a usage scenario  validation uses conventional black box methods test case design draws on conventional methods, but also encompasses special THESE SLIDES ARE DESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. 69
  • 70.
    BROADENING THE VIEWOF “TESTING” THESE SLIDES ARE DESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. 70 It can be argued that the review of OO analysis and design models is especially useful because the same semantic constructs (e.g., classes, attributes, operations, messages) appear at the analysis, design, and code level. Therefore, a problem in the definition of class attributes that is uncovered during analysis will circumvent side effects that might occur if the problem were not discovered until design or code (or even the next iteration of analysis).
  • 71.
    These slides aredesigned to accompany Software Engineering: A Practitioner’s Practitioner’s Approach, 7/e Approach, 7/e (McGraw-Hill 71 CLASS MODEL CONSISTENCY Revisit the CRC model and the object-relationship model. Inspect the description of each CRC index card to determine if a delegated responsibility is part of the collaborator’s definition. Invert the connection to ensure that each collaborator that is asked for service is receiving requests from a reasonable source. Using the inverted connections examined in the preceding step, determine whether other classes might be required or whether responsibilities are properly grouped among the classes. Determine whether widely requested responsibilities might be combined into a single responsibility.
  • 72.
    These slides aredesigned to accompany Software Engineering: A Practitioner’s Practitioner’s Approach, 7/e Approach, 7/e (McGraw-Hill 72 OO TESTING STRATEGIES Unit testing the concept of the unit changes  the smallest testable unit is the encapsulated class  a single operation can no longer be tested in isolation (the conventional view of unit testing) but rather, as part of a class Integration Testing  Thread-based testing integrates the set of classes required to respond to one input or event for the system  Use-based testing begins the construction of the system by testing those classes (called independent classes) that use very few (if any) of server classes. After the independent classes are tested, the next layer of classes, called dependent classes  Cluster testing [McG94] defines a cluster of collaborating classes (determined by examining the CRC and object-relationship model) is exercised by designing test cases that attempt to uncover errors in the collaborations.
  • 73.
    These slides aredesigned to accompany Software Engineering: A Practitioner’s Practitioner’s Approach, 7/e Approach, 7/e (McGraw-Hill 73 OO TESTING STRATEGIES Validation Testing  details of class connections disappear  draw upon use cases (Chapters 5 and 6) that are part of the requirements model  Conventional black-box testing methods (Chapter 18) can be used to drive validation tests
  • 74.
    These slides aredesigned to accompany Software Engineering: A Practitioner’s Practitioner’s Approach, 7/e Approach, 7/e (McGraw-Hill 74 OOT METHODS Berard [Ber93] proposes the following approach: 1. Each test case should be uniquely identified and should be explicitly associated with the class to be tested, 2. The purpose of the test should be stated, 3. A list of testing steps should be developed for each test and should contain [BER94]: a. a list of specified states for the object that is to be tested b. a list of messages and operations that will be exercised as a consequence of the test c. a list of exceptions that may occur as the object is tested d. a list of external conditions (i.e., changes in the environment external to the software that must exist in order to properly conduct the test) e. supplementary information that will aid in understanding or implementing the test.
  • 75.
    These slides aredesigned to accompany Software Engineering: A Practitioner’s Practitioner’s Approach, 7/e (McGraw-Hill (McGraw-Hill 2009). Slides copyright 2009 by 75 TESTING METHODS Fault-based testing  The tester looks for plausible faults (i.e., aspects of the implementation of the system that may result in defects). To determine whether these faults exist, test cases are designed to exercise the design or code. Class Testing and the Class Hierarchy  Inheritance does not obviate the need for thorough testing of all derived classes. In fact, it can actually complicate the testing process. Scenario-Based Test Design  Scenario-based testing concentrates on what the user does, not what the product does. This means capturing the tasks (via use-cases) that the user has to perform, then applying them and their variants as tests.
  • 76.
    These slides aredesigned to accompany Software Engineering: A Practitioner’s Practitioner’s Approach, 7/e Approach, 7/e (McGraw-Hill 76 OOT METHODS: RANDOM TESTING Random testing  identify operations applicable to a class  define constraints on their use  identify a minimum test sequence  an operation sequence that defines the minimum life history of the class (object)  generate a variety of random (but valid) test sequences  exercise other (more complex) class instance life histories
  • 77.
    These slides aredesigned to accompany Software Engineering: A Practitioner’s Practitioner’s Approach, 7/e Approach, 7/e (McGraw-Hill 77 OOT METHODS: PARTITION TESTIN Partition Testing  reduces the number of test cases required to test a class in much the same way as equivalence partitioning for conventional software  state-based partitioning  categorize and test operations based on their ability to change the state of a class  attribute-based partitioning  categorize and test operations based on the attributes that they use  category-based partitioning  categorize and test operations based on the generic function each performs
  • 78.
    These slides aredesigned to accompany Software Engineering: A Practitioner’s Practitioner’s Approach, 7/e Approach, 7/e (McGraw-Hill 78 OOT METHODS: INTER-CLASS TESTING Inter-class testing  For each client class, use the list of class operators to generate a series of random test sequences. The operators will send messages to other server classes.  For each message that is generated, determine the collaborator class and the corresponding operator in the server object.  For each operator in the server object (that has been invoked by messages sent from the client object), determine the messages that it transmits.  For each of the messages, determine the next level of operators that are invoked and incorporate these into the test sequence
  • 79.
    These slides aredesigned to accompany Software Engineering: A Practitioner’s Practitioner’s Approach, 7/e Approach, 7/e (McGraw-Hill 79 OOT METHODS: BEHAVIOR TESTING empty acct open setupAccnt set up acct deposit (initial) working acct withdrawal (final) dead acct close nonworking acct deposit withdraw balance credit accntInfo Figure 14.3 State diagram for Account class (adapted from [ KIR94]) The tests to be designed should achieve all state coverage [KIR94]. That is, the operation sequences should cause the Account class to make transition through all allowable states
  • 80.
  • 81.
    81 THESE SLIDES AREDESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. TESTING QUALITY DIMENSIONS-I Content is evaluated at both a syntactic and semantic level.  syntactic level—spelling, punctuation and grammar are assessed for text-based documents.  semantic level—correctness (of information presented), consistency (across the entire content object and related objects) and lack of ambiguity are all assessed. Function is tested for correctness, instability, and general conformance to appropriate implementation standards (e.g.,Java or XML language standards). Structure is assessed to ensure that it  properly delivers WebApp content and function  is extensible
  • 82.
    82 THESE SLIDES AREDESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. TESTING QUALITY DIMENSIONS-II Usability is tested to ensure that each category of user  is supported by the interface  can learn and apply all required navigation syntax and semantics Navigability is tested to ensure that  all navigation syntax and semantics are exercised to uncover any navigation errors (e.g., dead links, improper links, erroneous links). Performance is tested under a variety of operating conditions, configurations, and loading to ensure that  the system is responsive to user interaction
  • 83.
    83 THESE SLIDES AREDESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. TESTING QUALITY DIMENSIONS- III Compatibility is tested by executing the WebApp in a variety of different host configurations on both the client and server sides.  The intent is to find errors that are specific to a unique host configuration. Interoperability is tested to ensure that the WebApp properly interfaces with other applications and/or databases. Security is tested by assessing potential vulnerabilities and attempting to exploit each.
  • 84.
    84 THESE SLIDES AREDESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. ERRORS IN A WEBAPP Because many types of WebApp tests uncover problems that are first evidenced on the client side, you often see a symptom of the error, not the error itself. Because a WebApp is implemented in a number of different configurations and within different environments, it may be difficult or impossible to reproduce an error outside the environment in which the error was originally encountered. Although some errors are the result of incorrect design or improper HTML (or other programming language) coding, many errors can be traced to the WebApp configuration. Because WebApps reside within a client/server architecture, errors can be difficult to trace across three architectural layers: the client, the server, or the network itself. Some errors are due to the static operating environment (i.e., the specific configuration in which testing is conducted), while others are attributable to the dynamic operating environment (i.e., instantaneous resource loading or time-related errors).
  • 85.
    85 THESE SLIDES AREDESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. WEBAPP TESTING STRATEGY-I The content model for the WebApp is reviewed to uncover errors. The interface model is reviewed to ensure that all use-cases can be accommodated. The design model for the WebApp is reviewed to uncover navigation errors. The user interface is tested to uncover errors in presentation and/or navigation mechanics. Selected functional components are unit
  • 86.
    86 THESE SLIDES AREDESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. WEBAPP TESTING STRATEGY-II Navigation throughout the architecture is tested. The WebApp is implemented in a variety of different environmental configurations and is tested for compatibility with each configuration. Security tests are conducted in an attempt to exploit vulnerabilities in the WebApp or within its environment. Performance tests are conducted. The WebApp is tested by a controlled and monitored population of end-users  the results of their interaction with the system are evaluated for content and navigation errors, usability concerns, compatibility concerns, and WebApp reliability and performance.
  • 87.
    87 THESE SLIDES AREDESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. THE TESTING PROCESS Interface design Aesthetic design Content design Navigation design Architecture design Component design user technology Co nt ent Test ing Int erf ace T est ing Comp o nent T est ing Navig at io n T est ing Perf ormance T est ing Conf ig urat io n T est ing Securit y T est ing
  • 88.
    88 THESE SLIDES AREDESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. CONTENT TESTING Content testing has three important objectives:  to uncover syntactic errors (e.g., typos, grammar mistakes) in text-based documents, graphical representations, and other media  to uncover semantic errors (i.e., errors in the accuracy or completeness of information) in any content object presented as navigation occurs, and  to find errors in the organization or structure of content that is presented to the end-user.
  • 89.
    89 THESE SLIDES AREDESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. ASSESSING CONTENT SEMANTICS Is the information factually accurate? Is the information concise and to the point? Is the layout of the content object easy for the user to understand? Can information embedded within a content object be found easily? Have proper references been provided for all information derived from other sources? Is the information presented consistent internally and consistent with information presented in other content objects? Is the content offensive, misleading, or does it open the door to litigation? Does the content infringe on existing copyrights or trademarks? Does the content contain internal links that supplement existing content? Are the links correct? Does the aesthetic style of the content conflict with the aesthetic style of the interface?
  • 90.
    90 THESE SLIDES AREDESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. DATABASE TESTING client layer - user int erface server layer - WebApp server layer - dat a t ransformat ion dat abase layer - dat a access server layer - dat a management dat abase HTML scripts user data SQL user data SQL raw data Tests are defined for each layer
  • 91.
    91 THESE SLIDES AREDESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. USER INTERFACE TESTING Interface features are tested to ensure that design rules, aesthetics, and related visual content is available for the user without error. Individual interface mechanisms are tested in a manner that is analogous to unit testing. Each interface mechanism is tested within the context of a use-case or NSU (navigation semantic units) for a specific user category. The complete interface is tested against selected use- cases and NSUs to uncover errors in the semantics of the interface. The interface is tested within a variety of environments (e.g., browsers) to ensure that it will be compatible.
  • 92.
    92 THESE SLIDES AREDESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. TESTING INTERFACE MECHANISMS-I Links—navigation mechanisms that link the user to some other content object or function. Forms—a structured document containing blank fields that are filled in by the user. The data contained in the fields are used as input to one or more WebApp functions. Client-side scripting—a list of programmed commands in a scripting language (e.g., Javascript) that handle information input via forms or other user interactions Dynamic HTML—leads to content objects that are manipulated on the client side using scripting or cascading style sheets (CSS). Client-side pop-up windows—small windows that pop- up without user interaction. These windows can be content-oriented and may require some form of user
  • 93.
    93 THESE SLIDES AREDESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. TESTING INTERFACE MECHANISMS-II CGI scripts—a common gateway interface (CGI) script implements a standard method that allows a Web server to interact dynamically with users (e.g., a WebApp that contains forms may use a CGI script to process the data contained in the form once it is submitted by the user). Streaming content—rather than waiting for a request from the client-side, content objects are downloaded automatically from the server side. This approach is sometimes called “push” technology because the server pushes data to the client. Cookies—a block of data sent by the server and stored by a browser as a consequence of a specific user interaction. The content of the data is WebApp-specific (e.g., user identification data or a list of items that have been selected for purchase by the user). Application specific interface mechanisms—include one or more “macro” interface mechanisms such as a shopping cart, credit card processing, or a shipping cost calculator.
  • 94.
    94 THESE SLIDES AREDESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. USABILITY TESTS Design by WebE team … executed by end-users Testing sequence …  Define a set of usability testing categories and identify goals for each.  Design tests that will enable each goal to be evaluated.  Select participants who will conduct the tests.  Instrument participants’ interaction with the WebApp while testing is conducted.  Develop a mechanism for assessing the usability of the WebApp different levels of abstraction:  the usability of a specific interface mechanism (e.g., a form) can be assessed  the usability of a complete Web page (encompassing interface mechanisms, data objects and related functions) can be evaluated  the usability of the complete WebApp can be considered.
  • 95.
    95 THESE SLIDES AREDESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. COMPATIBILITY TESTING Compatibility testing is to define a set of “commonly encountered” client side computing configurations and their variants Create a tree structure identifying  each computing platform  typical display devices  the operating systems supported on the platform  the browsers available  likely Internet connection speeds  similar information. Derive a series of compatibility validation tests  derived from existing interface tests, navigation tests, performance tests, and security tests.  intent of these tests is to uncover errors or execution problems that can be traced to configuration differences.
  • 96.
    96 THESE SLIDES AREDESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. COMPONENT-LEVEL TESTING Focuses on a set of tests that attempt to uncover errors in WebApp functions Conventional black-box and white-box test case design methods can be used Database testing is often an integral part of the component-testing regime
  • 97.
    97 THESE SLIDES AREDESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. NAVIGATION TESTING The following navigation mechanisms should be tested:  Navigation links—these mechanisms include internal links within the WebApp, external links to other WebApps, and anchors within a specific Web page.  Redirects—these links come into play when a user requests a non- existent URL or selects a link whose destination has been removed or whose name has changed.  Bookmarks—although bookmarks are a browser function, the WebApp should be tested to ensure that a meaningful page title can be extracted as the bookmark is created.  Frames and framesets—tested for correct content, proper layout and sizing, download performance, and browser compatibility  Site maps—Each site map entry should be tested to ensure that the link takes the user to the proper content or functionality.  Internal search engines—Search engine testing validates the accuracy and completeness of the search, the error-handling properties of the search engine, and advanced search features
  • 98.
    98 THESE SLIDES AREDESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. TESTING NAVIGATION SEMANTICS-I Is the NSU achieved in its entirety without error? Is every navigation node (defined for a NSU) reachable within the context of the navigation paths defined for the NSU? If the NSU can be achieved using more than one navigation path, has every relevant path been tested? If guidance is provided by the user interface to assist in navigation, are directions correct and understandable as navigation proceeds? Is there a mechanism (other than the browser ‘back’ arrow) for returning to the preceding navigation node and to the beginning of the navigation path. Do mechanisms for navigation within a large navigation node (i.e., a long web page) work properly? If a function is to be executed at a node and the user chooses not to provide input, can the remainder of the NSU be completed?
  • 99.
    99 THESE SLIDES AREDESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. TESTING NAVIGATION SEMANTICS-II If a function is executed at a node and an error in function processing occurs, can the NSU be completed? Is there a way to discontinue the navigation before all nodes have been reached, but then return to where the navigation was discontinued and proceed from there? Is every node reachable from the site map? Are node names meaningful to end-users? If a node within an NSU is reached from some external source, is it possible to process to the next node on the navigation path. Is it possible to return to the previous node on the navigation path? Does the user understand his location within the content architecture as the NSU is executed?
  • 100.
  • 101.
    101 THESE SLIDES AREDESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. CONFIGURATION TESTING Server-side  Is the WebApp fully compatible with the server OS?  Are system files, directories, and related system data created correctly when the WebApp is operational?  Do system security measures (e.g., firewalls or encryption) allow the WebApp to execute and service users without interference or performance degradation?  Has the WebApp been tested with the distributed server configuration (if one exists) that has been chosen?  Is the WebApp properly integrated with database software? Is the WebApp sensitive to different versions of database software?  Do server-side WebApp scripts execute properly?  Have system administrator errors been examined for their affect on WebApp operations?  If proxy servers are used, have differences in their configuration been addressed with on-site testing?
  • 102.
    102 THESE SLIDES AREDESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. CONFIGURATION TESTING Client-side  Hardware—CPU, memory, storage and printing devices  Operating systems—Linux, Macintosh OS, Microsoft Windows, a mobile-based OS  Browser software—Internet Explorer, Mozilla/Netscape, Opera, Safari, and others  User interface components—Active X, Java applets and others  Plug-ins—QuickTime, RealPlayer, and many others  Connectivity—cable, DSL, regular modem, T1 The number of configuration variables must be reduced to a manageable number
  • 103.
    103 THESE SLIDES AREDESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. SECURITY TESTING Designed to probe vulnerabilities of the client-side environment, the network communications that occur as data are passed from client to server and back again, and the server-side environment On the client-side, vulnerabilities can often be traced to pre-existing bugs in browsers, e-mail programs, or communication software. On the server-side, vulnerabilities include denial-of-service attacks and malicious scripts that can be passed along to the client-side or used to disable server
  • 104.
    104 THESE SLIDES AREDESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. PERFORMANCE TESTING Does the server response time degrade to a point where it is noticeable and unacceptable? At what point (in terms of users, transactions or data loading) does performance become unacceptable? What system components are responsible for performance degradation? What is the average response time for users under a variety of loading conditions? Does performance degradation have an impact on system security? Is WebApp reliability or accuracy affected as the load on the system grows? What happens when loads that are greater than maximum server capacity are applied?
  • 105.
    105 THESE SLIDES AREDESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. LOAD TESTING The intent is to determine how the WebApp and its server-side environment will respond to various loading conditions N, the number of concurrent users T, the number of on-line transactions per unit of time D, the data load processed by the server per transaction Overall throughput, P, is computed in the following manner:  P = N x T x D
  • 106.
    106 THESE SLIDES AREDESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. STRESS TESTING Does the system degrade ‘gently’ or does the server shut down as capacity is exceeded? Does server software generate “server not available” messages? More generally, are users aware that they cannot reach the server? Does the server queue requests for resources and empty the queue once capacity demands diminish? Are transactions lost as capacity is exceeded? Is data integrity affected as capacity is exceeded? What values of N, T, and D force the server environment to fail? How does failure manifest itself? Are automated notifications sent to technical support staff at the server site? If the system does fail, how long will it take to come back on-line? Are certain WebApp functions (e.g., compute intensive functionality, data streaming capabilities) discontinued as capacity reaches the 80 or 90 percent level?
  • 107.
    HIGH ORDER TESTING Validation testing Focus is on software requirements System testing  Focus is on system integration Alpha/Beta testing  Focus is on customer usage Recovery testing  forces the software to fail in a variety of ways and verifies that recovery is properly performed THESE SLIDES ARE DESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. 107
  • 108.
    HIGH ORDER TESTING Security testing verifies that protection mechanisms built into a system will, in fact, protect it from improper penetration Stress testing  executes a system in a manner that demands resources in abnormal quantity, frequency, or volume Performance Testing  test the run-time performance of software within the context of an integrated system THESE SLIDES ARE DESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. 108
  • 109.
  • 110.
    DEBUGGING: A DIAGNOSTIC PROCESS THESESLIDES ARE DESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. 110
  • 111.
    THE DEBUGGING PROCESS THESESLIDES ARE DESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. 111
  • 112.
    DEBUGGING PROCESS The debuggingprocess begins with the execution of a test case. Results are assessed and a lack of correspondence between expected and actual performance is encountered. The debugging process will always have one of two outcomes:  1. The cause will be found and corrected, or  2. The cause will not be found. In the latter case, the person performing debugging may suspect a cause, design a test case to help validate that suspicion, and work toward error correction in an iterative fashion.
  • 113.
    DEBUGGING EFFORT THESE SLIDESARE DESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. 113 time required to diagnose the symptom and determine the cause time required to correct the error and conduct regression tests
  • 114.
    SYMPTOMS & CAUSES THESESLIDES ARE DESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. 114 symptom cause symptom and cause may be geographically separated symptom may disappear when another problem is fixed cause may be due to a combination of non-errors cause may be due to a system or compiler error cause may be due to assumptions that everyone believes symptom may be intermittent
  • 115.
    CONSEQUENCES OF BUGS THESESLIDES ARE DESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. 115 damage mild annoying disturbing serious extreme catastrophic infectious Bug Type Bug Categories: function-related bugs, system-related bugs, data bugs, coding bugs, design bugs, documentation bugs, standards violations, etc.
  • 116.
    DEBUGGING TECHNIQUES THESE SLIDES AREDESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. 116 brute force / testing backtracking induction deduction
  • 117.
    CORRECTING THE ERROR Isthe cause of the bug reproduced in another part of the program? In many situations, a program defect is caused by an erroneous pattern of logic that may be reproduced elsewhere. What "next bug" might be introduced by the fix I'm about to make? Before the correction is made, the source code (or, better, the design) should be evaluated to assess coupling of logic and data structures. What could we have done to prevent this bug in the first place? This question is the first step toward establishing a statistical software quality assurance approach. If you correct the process as well as the product, the bug will be removed from the current program and may be eliminated from all future programs. THESE SLIDES ARE DESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. 117
  • 118.
    FINAL THOUGHTS Think --before you act to correct Use tools to gain additional insight If you’re at an impasse, get help from someone else Once you correct the bug, use regression testing to uncover any side effects THESE SLIDES ARE DESIGNED TO ACCOMPANY SOFTWARE ENGINEERING: A PRACTITIONER’S APPROACH, 7/E (MCGRAW-HILL 2009). SLIDES COPYRIGHT 2009 BY ROGER PRESSMAN. 118
  • 119.
    SOFTWARE TESTING 119 PARTSOF A TEST CASE 1 1. Test Case No. 2. Test case Description 3. Pre-Requisites 4. Extra Instructions 5. Dependencies 6. Detailed Description 7. Steps / Actions
  • 120.
    SOFTWARE TESTING 120 PARTSOF A TEST CASE 2 8. Expected Results 9. Actual Results 10.Test case Status
  • 121.
    test-case-id: Test CaseTitle Purpose:  Short sentence or two about the aspect of the system is being tested. If this gets too long, break the test case up or put more information into the feature descriptions. Prereq:Assumptions that must be met before the test case can be run. E.g., "logged in", "guest login allowed", "user testuser exists". Test Data:List of variables and their possible values used in the test case. You can list specific values or describe value ranges. The test case should be performed once for each combination of values. These values are written in set notation, one per line. E.g.: loginID = {Valid loginID, invalid loginID, valid email, invalid email, empty}password = {valid, invalid, empty}
  • 122.
    Steps:Steps to carryout the test. See step formating rules below.  visit LoginPage  enter userID  enter password  click login  see the terms of use page  click agree radio button at page bottom  click submit button  see PersonalPage  verify that welcome message is correct username
  • 123.
    Format of teststeps Each step can be written very briefly using the following keywords: login [as ROLE-OR-USER]  Log into the system with a given user or a user of the given type. Usually only stated explicitly when the test case depends on the permissions of a particular role or involves a workflow between different users. visit LOCATION  Visit a page or screen. For web applications, LOCATION may be a hyperlink. The location should be a well-known starting point (e.g., the Login screen), drilling down to specific pages should be part of the test. enter FIELD-NAME [as VALUE] [in SCREEN-LOCATION]  Fill in a named form field. VALUE can be a literal value or the name of a variable defined in the "Test Data" section. The FIELD-NAME itself can be a variable name when the UI field for that value is clear from context, e.g., "enter password". enter FIELDS  Fill in all fields in a form when their values are clear from context or when their specific values are not important in this test case. click "LINK-LABEL" [in SCREEN-LOCATION]  Follow a labeled link or press a button. The screen location can be a predefined panel name or English phrase. Predefined panel names are based on GUI class names, master template names, or titles of boxes on the page.
  • 124.
    click BUTTON-NAME [inSCREEN-LOCATION]  Press a named button. This step should always be followed by a "see" step to check the results. see SCREEN-OR-PAGE  The tester should see the named GUI screen or web page. The general correctness of the page should be testable based on the feature description. verify CONDITION  The tester should see that the condition has been satisfied. This type of step usually follows a "see" step at the end of the test case. verify CONTENT [is VALUE]  The tester should see the named content on the current page, the correct values should be clear from the test data, or given explicitly. This type of step usually follows a "see" step at the end of the test case. perform TEST-CASE-NAME  This is like a subroutine call. The tester should perform all the steps of the named test case and then continue on to the next step of this test case. Every test case must include a verify step at the end so that the expected output is very clear. A test case can have multiple verify steps in the middle or at the end. Having multiple verify steps can be useful if you want a smaller number of long tests rather than a large number of short tests.
GUIDELINES FOR TEST CASE WRITING
1. Test cases need to be simple.
2. Create test cases with the end user in mind.
3. Avoid test case repetition.
4. Do not assume.
5. Ensure 100% coverage.
6. Test cases must be identifiable.
7. Peer review.
TEST CASE TEMPLATE
Form: Test Case Template (see handouts)
EXERCISE
Write test cases for:
 Login screen
 Email sending
 Pen
 Printer
TEST PLANNING
The Test Plan: defines the scope of the work to be performed.
The Test Procedure: a container document that holds all of the individual tests (test scripts) that are to be executed.
The Test Report: documents what occurred when the test scripts were run.
TEST PLAN
Questions to be answered:
 How many tests are needed?
 How long will it take to develop those tests?
 How long will it take to execute those tests?
Topics to be addressed:
 Test estimation
 Test development and informal validation
 Validation readiness review and formal validation
 Test completion criteria
TEST PLANNING
Shows what you are planning to do.
Form: Test Plan Template (see handouts)
TEST REPORT
Shows what happened when you ran your tests.
Form: Test Report Template (see handouts)
TEST PROCEDURE
A collection of test scripts.
An integral part of each test script is the expected results.
The Test Procedure document should contain an unexecuted, clean copy of every test so that the tests may be more easily reused.
TEST REPORT
A completed copy of each test script, with evidence that it was executed (i.e., dated, with the signature of the person who ran the test)
A copy of each SPR (Software Problem Report) showing its resolution
A list of open or unresolved SPRs
Identification of the SPRs found in each baseline, along with the total number of SPRs in each baseline
The regression tests executed for each software baseline
VALIDATION TEST PLAN (IEEE STANDARD 1012-1998)
1. Overview
 a. Organization
 b. Tasks and Schedules
 c. Responsibilities
 d. Tools, Techniques, Methods
2. Processes
 a. Management
 b. Acquisition
 c. Supply
 d. Development
 e. Operation
 f. Maintenance
VALIDATION TEST PLAN (IEEE STANDARD 1012-1998) (CONT'D)
3. Reporting Requirements
4. Administrative Requirements
5. Documentation Requirements
6. Resource Requirements
7. Completion Criteria
TOOLS
Defect reporting and tracking:
 Bugzilla
 Jira
Test management tools (functional and UI testing):
 Selenium
 WinRunner
Load/performance testing:
 LoadRunner
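As a hedged illustration of the functional/UI category (not from the slides), the login steps from the earlier test case might be automated with Selenium WebDriver roughly as follows. The URL, element IDs, and expected text are hypothetical.

# Illustrative Selenium WebDriver sketch; URL and element IDs are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")                        # visit LoginPage (hypothetical URL)
    driver.find_element(By.ID, "userID").send_keys("testuser")     # enter userID
    driver.find_element(By.ID, "password").send_keys("secret")     # enter password
    driver.find_element(By.ID, "login").click()                    # click login
    driver.find_element(By.ID, "agree").click()                    # click agree radio button
    driver.find_element(By.ID, "submit").click()                   # click submit button
    # verify: the welcome message shows the correct username
    welcome = driver.find_element(By.ID, "welcome").text
    assert "testuser" in welcome, f"unexpected welcome message: {welcome!r}"
finally:
    driver.quit()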
COMMON SOFTWARE ERRORS
Check Appendix A of "Testing Computer Software" by Kaner, Falk and Nguyen, 2nd Edition.
COMMON SOFTWARE ERRORS
User interface errors
Boundary-related errors
Calculation errors
Initial and later state errors
Control flow errors
Race conditions
Errors in handling or interpreting data
Load conditions
Source, version, and ID control
Testing errors
COMMON SOFTWARE ERRORS (CONT'D)
Boundaries in time (performance)
Boundaries in loops
Boundaries in memory (capacity/performance)
Boundaries within data structures
Hardware-related boundaries (capacity/load/stress)
Impossible number of parentheses (expression errors)
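To illustrate how boundary-related errors are typically targeted (an illustrative sketch, not from the slides): a parametrized test that exercises values just below, at, and just above a limit. The function under test and the limit of 100 are hypothetical.

# Illustrative boundary-value test (pytest); the function and limit are hypothetical.
import pytest

MAX_QTY = 100  # assumed upper boundary for an order quantity

def accept_quantity(qty: int) -> bool:
    """Hypothetical function under test: accepts 1..MAX_QTY inclusive."""
    return 1 <= qty <= MAX_QTY

@pytest.mark.parametrize("qty, expected", [
    (0, False),            # just below the lower boundary
    (1, True),             # lower boundary
    (2, True),             # just above the lower boundary
    (MAX_QTY - 1, True),   # just below the upper boundary
    (MAX_QTY, True),       # upper boundary
    (MAX_QTY + 1, False),  # just above the upper boundary
])
def test_quantity_boundaries(qty, expected):
    assert accept_quantity(qty) == expected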
DEFECT
What is a defect? A defect is a variance from a desired product attribute.
DEFECT LOG
1. Defect ID number
2. Descriptive defect name and type
3. Source of defect (test case or other source)
4. Defect severity
5. Defect priority
6. Defect status (e.g., new, open, fixed, closed, reopened, rejected)
DEFECT LOG (CONT'D)
7. Date and time tracking for either the most recent status change or for each change in status
8. Detailed description, including the steps necessary to reproduce the defect
9. Component or program where the defect was found (product version)
10. Screen prints, logs, etc. that will aid the developer in the resolution process
11. Person assigned to research and/or correct the defect
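A minimal sketch of a defect log entry as a data structure, with field names derived from the list above (the class, its defaults, and the example values are illustrative, not from the slides):

# Illustrative defect log record; field names follow the list above.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class Defect:
    defect_id: str
    name: str
    defect_type: str
    source: str                      # test case ID or other source
    severity: str                    # e.g., "Critical", "Major", "Minor"
    priority: str                    # e.g., "P1", "P2"
    status: str = "New"              # New, Open, Fixed, Closed, Reopen, Reject
    last_status_change: datetime = field(default_factory=datetime.now)
    description: str = ""            # including steps to reproduce
    component: str = ""              # component/program and product version
    attachments: list = field(default_factory=list)   # screen prints, logs, ...
    assigned_to: Optional[str] = None

# Example entry (hypothetical values):
bug = Defect(defect_id="D-042", name="Login button unresponsive", defect_type="UI",
             source="TC-LOGIN-003", severity="Major", priority="P2",
             description="1. visit LoginPage 2. enter valid credentials 3. click login -> nothing happens",
             component="Login module v1.2", assigned_to="dev1")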
SEVERITY VS PRIORITY
Severity: a factor that shows how bad the defect is and the impact it has on the product.
Priority: based on input from users about which defects are most important to them and should be fixed first.
SEVERITY LEVELS
Critical
Major / High
Average / Medium
Minor / Low
Cosmetic defects
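As an illustration of how severity and priority can drive triage ordering (an illustrative sketch; the ranking scheme and sample defects are assumptions, not from the slides), defects can be sorted by priority first, since priority determines what gets fixed first, with severity as a tie-breaker:

# Illustrative triage ordering: sort by priority rank, then severity rank.
SEVERITY_RANK = {"Critical": 0, "Major": 1, "Average": 2, "Minor": 3, "Cosmetic": 4}
PRIORITY_RANK = {"P1": 0, "P2": 1, "P3": 2}

defects = [
    {"id": "D-1", "severity": "Minor",    "priority": "P1"},
    {"id": "D-2", "severity": "Critical", "priority": "P2"},
    {"id": "D-3", "severity": "Major",    "priority": "P1"},
]

triage_order = sorted(defects,
                      key=lambda d: (PRIORITY_RANK[d["priority"]],
                                     SEVERITY_RANK[d["severity"]]))
print([d["id"] for d in triage_order])   # ['D-3', 'D-1', 'D-2']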
TEST ESTIMATION
The number of test cases required is based on:
 Testing all functions and features in the SRS
 Including an appropriate number of ALAC (Act Like A Customer) tests, including:
  Do it wrong
  Use wrong or illegal combinations of inputs
  Don't do enough
  Do nothing
  Do too much
 Achieving some test coverage goal
 Achieving a software reliability goal
CONSIDERATIONS IN TEST ESTIMATION
Test complexity: it is better to have many small tests than a few large ones.
Different platforms: does testing need to be modified for different platforms, operating systems, etc.?
Automated or manual tests: will automated tests be developed? Automated tests take more time to create but do not require human intervention to run.
ESTIMATING TESTS REQUIRED
SRS Reference   Estimated Tests Required   Notes
4.1.1           3                          2 positive and 1 negative test
4.1.2           2                          2 automated tests
4.1.3           4                          4 manual tests
4.1.4           5                          1 boundary condition, 2 error conditions, 2 usability tests
...
Total           165
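A minimal sketch of tallying such an estimate per SRS reference (the dictionary repeats only the rows shown above; the slide's overall total of 165 comes from the remaining requirements, which are elided here):

# Illustrative tally of estimated tests per SRS reference (only the rows shown above).
estimates = {
    "4.1.1": 3,   # 2 positive and 1 negative test
    "4.1.2": 2,   # 2 automated tests
    "4.1.3": 4,   # 4 manual tests
    "4.1.4": 5,   # 1 boundary condition, 2 error conditions, 2 usability tests
    # ... remaining SRS references omitted; the slide's overall total is 165
}
print(f"Subtotal for the listed requirements: {sum(estimates.values())} tests")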
ESTIMATED TEST DEVELOPMENT TIME
Estimated number of tests: 165
Average test development time: 3.5 person-hours/test
Estimated test development time: 165 x 3.5 = 577.5 person-hours
ESTIMATED TEST EXECUTION TIME
Estimated number of tests: 165
Average test execution time: 1.5 person-hours/test
Estimated test execution time: 165 x 1.5 = 247.5 person-hours
Estimated regression testing (50%): 123.75 person-hours
Total estimated test execution time: 371.25 person-hours
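The same arithmetic, wrapped in a small helper so the inputs can be varied (an illustrative sketch; the function and parameter names are assumptions):

# Illustrative test-effort estimate, reproducing the slide's numbers.
def estimate_effort(num_tests, dev_hours_per_test, exec_hours_per_test,
                    regression_factor=0.5):
    dev = num_tests * dev_hours_per_test
    execution = num_tests * exec_hours_per_test
    regression = execution * regression_factor
    return {"development": dev,
            "execution": execution,
            "regression": regression,
            "total_execution": execution + regression}

print(estimate_effort(165, 3.5, 1.5))
# {'development': 577.5, 'execution': 247.5, 'regression': 123.75, 'total_execution': 371.25}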
WHEN TO STOP TESTING?
How is the decision to stop testing made?
 Coverage
 Limits set by management
 User acceptance
 Contractual requirements
 Fault detection rate threshold
 Meeting a standard
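One way to make such exit criteria checkable is to encode them as explicit thresholds (an illustrative sketch; the criterion names and threshold values are assumptions, not from the slides):

# Illustrative exit-criteria check; thresholds are assumed values.
def ready_to_stop(coverage, faults_per_week, open_critical_defects,
                  coverage_goal=0.90, fault_rate_threshold=2):
    criteria = {
        "coverage goal met": coverage >= coverage_goal,
        "fault detection rate below threshold": faults_per_week <= fault_rate_threshold,
        "no open critical defects": open_critical_defects == 0,
    }
    return all(criteria.values()), criteria

stop, details = ready_to_stop(coverage=0.93, faults_per_week=1, open_critical_defects=0)
print(stop, details)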
REFERENCES
Sommerville, Ian. Software Engineering, 6th Edition.
Pressman, Roger. Software Engineering: A Practitioner's Approach, 6th Edition.
And many more.
