Department of Computer Science and Engineering
Arizona State University
Tempe, AZ 85287
Fall semester 2003 CSE565 1
Software Testing – Key Terms
- Verification: Are we building the product right ?
- Validation: Are we building the right product ?
- Reliability: Probability that a given software program
performs as expected for a period of time without
failure.
- Testing: Examination of the behavior of a software program
over a set of sample data.
Some Good Books on Testing
• G. Myers, The Art of Software Testing, Wiley, 1979.
• B. Beizer, most of his books are good; his recent book on
black-box testing is particularly useful.
• M. Ould and C. Unwin, Testing in Software Development,
Cambridge University Press, 1987.
• R. C. Wilson, Software Rx: Secrets of Engineering
Quality Software, Prentice Hall, 1997.
• S. Kirani and W. T. Tsai, “Testing Object-Oriented
Software”, TR, University of Minnesota, 1994.
Errors, Bugs and Failures
• Error: A human mistake.
• Fault: A bug that appears in a given program.
• Failure: Running an input sequence that triggers a bug
and/or produces an output that differs from the
expected output.
• One error can result in multiple bugs.
• Multiple errors can result in one bug.
• One bug can have one or more failures.
• Multiple bugs can lead to one or multiple failures.
Why Do We Need Software Testing ?
• No One can write Perfect Code all the time.
• Errors in Commercial Products cause Loss in Revenue.
• Failures in High Availability and Safety Critical
Systems can cause serious irreversible damages.
• Misunderstanding user requirements can lead to
development of perfectly good wrong products.
Objectives of Testing
• Testing does not mean “Finding Bugs” ONLY.
• The Objectives of Software Testing are
- Find Errors.
- Verify Requirements.
- Make predictions about the product(s).
Of the above objectives, the last one is quite
difficult. Why ? Because it depends on several
external factors in addition to the standard factors.
Some Testing Criteria
• Robustness: Does the software component deteriorate
gracefully as it approaches the limits specified in the
requirements ?
• Completeness: Does the software solve the problem
completely ?
• Consistency: Does the software component perform
consistently, i.e., does it produce the same
output each time for the same input(s) ?
• Usability: Is the software easy to use ?
• Testability: Is the software easily testable ?
• Safety: If the software component is safety critical, is it
safe to use ?
Why is Testing Difficult ?
• Generate Test Inputs
– How many inputs to generate ?
– Provide all the setup, environment, databases similar to
what the client has.
• Generate expected outputs
– Generally testing is done on a prototype. Will the actual
system behave exactly like the prototype ?
• Compare the test outputs with the expected
outputs.
Cost of Testing
• Cost of test input generation (positive)
• Cost of expected output generation (positive)
• Cost of running the test
• Cost of comparing test results and their expected outputs
• Cost of finding bugs (negative cost)
• Cost of missing bugs (positive and can be large)
• Cost of test management such as bug reporting, bug
tracking, scheduling (positive)
• Most research papers do not consider all the factors.
Cost of Software Testing contd..
• Usually high; can be as high as 70% to 90% of the
total cost, especially for projects that have a
poor design and development phase.
• Cost of software testing can be reduced by
automation (almost all the activities of testing can
be automated, e.g., test input generation, expected
output generation, test case reuse, and test running
can be automated but many of these techniques are
still highly manual).
Levels of Testing
• Unit/module/component test
– Test individual units separately.
– Deals with finding logic errors, syntax errors etc.
– Verify that component adheres to its specification.
• Integration test
– Find interface defects.
– Verify component interactions to make sure they are
correct.
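The unit-test level above can be illustrated with a minimal sketch. The `word_count` function is a hypothetical unit, not part of the course material; the test class verifies it against its specification in isolation:

```python
import unittest

def word_count(text):
    # Hypothetical unit under test: counts whitespace-separated words.
    return len(text.split())

class WordCountTest(unittest.TestCase):
    # Unit test: exercise the component separately from the rest of
    # the system and check it against its specification.
    def test_typical_input(self):
        self.assertEqual(word_count("to be or not to be"), 6)

    def test_empty_input(self):
        self.assertEqual(word_count(""), 0)

if __name__ == "__main__":
    unittest.main(exit=False)
```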
Levels of Testing contd..
• System test
– Verify the overall system functionality.
• Alpha testing
– Testing with select customers within the organization.
• Beta testing
– Testing with select customers external to the
organization.
Attitude(s) That Make A Good Tester
• Independence
• Customer Perspective
• Testing intended functionalities.
• Testing unintended functionalities.
Attitude Of a Good Tester
• Independence
- Independent from the developer. Why ?
Developers tend to be biased towards their mistakes.
• Customer Perspective
- Must be able to think from a customer's perspective.
Why ? Ultimately the customer is the one who will use the
product and who brings in the revenue, so a good tester
must be able to think from the customer's point of view.
Attitude Of a Good Tester contd..
• Testing Intended Functionality
- This is one of the basic purposes of testing. A good tester
tests each and every intended functionality to make
sure that the software is exactly what the client wanted.
• Testing Unintended Functionality
- Sometimes called break-it testing (Dirty Testing). In this
process the tester intentionally tries to make the code fail.
Helps in detecting some special cases where the code may
fail.
Formal Technical Reviews
– To uncover errors in function, logic or implementation
– To verify that the software under review meets its
requirements.
– To ensure that the software has been represented
according to predefined standards.
– To achieve software that is developed in a uniform
manner.
– To make projects more manageable.
Formal Technical Reviews(cont’d)
• The FTR is actually a class of reviews:
– Includes walkthroughs, inspections, round-robin
reviews, and other small-group technical assessments.
design, development and testing to understand
the state of a software product.
– To be effective, FTRs must be properly
planned, controlled, and attended.
• Meeting activities
• Following up
What is Inspection ?
• Formal statistical process control method
for evaluating work products.
• What do these terms mean ?
• Formal: follow a standard set of procedures and maintain
a serious ambience during the inspection process.
• Statistical: collate data and use standard metrics.
• Process Control Method: decisions are made using
the available metrics and statistics.
White-Box Testing
• A test case design method that uses the control
structure of the procedural design to derive test
cases. Using it, the tester can:
– Guarantee that all independent paths within a module
have been exercised at least once.
– Exercise all logical decisions on their true and false
sides.
– Execute all loops at their boundaries and within their
operational bounds.
– Exercise internal data structures to assure their
validity.
Black-Box Testing
• Focuses on the functional requirements of the
software. It is not an alternative approach to
white-box testing. Instead it acts as a
complement to the WB testing technique. It finds:
– Runtime errors (Missing function definitions etc).
– Interface errors.
– Performance errors, and
– Initialization and termination errors.
Basis Path Testing
• Basis Path Testing is a white-box technique that
derives a set of independent paths from which any
path through a computer program can be constructed.
• What is a Basis Path ?
It is a unique path through the software with no
loops; all possible paths are a linear
combination of the basis paths.
McCabe’s Basis Path Testing
• Draw a control flow graph.
• Calculate Cyclomatic Complexity.
• Choose a Basis Set of Paths.
• Generate Test Cases to test each of the
paths selected above.
Cyclomatic Complexity
V(G) = e + 2*p – n, where
e = no. of edges.
n = no. of nodes.
p = no. of connected components.
The higher the McCabe Number, the higher is
the complexity of the software and the more
error prone it becomes.
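The formula above can be checked on a small control flow graph. The graph below (an if-else followed by a single loop) is a made-up example, not taken from the lecture:

```python
def cyclomatic_complexity(edges, nodes, components=1):
    # McCabe number: V(G) = e + 2*p - n.
    return len(edges) + 2 * components - len(nodes)

# Hypothetical CFG for an if-else followed by a loop:
nodes = ["entry", "if", "then", "else", "join", "loop", "exit"]
edges = [("entry", "if"), ("if", "then"), ("if", "else"),
         ("then", "join"), ("else", "join"),
         ("join", "loop"), ("loop", "loop"), ("loop", "exit")]

print(cyclomatic_complexity(edges, nodes))  # 8 + 2 - 7 = 3
```

V(G) = 3 matches the intuition: two decision points (the if and the loop condition) plus one.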
Data Flow Testing
• Tests the use of variables along different paths of
a program.
• The most common types of errors occur because of
initialization before declaration or usage before
initialization.
• Global variables cause more problems than local
variables.
• Very expensive to perform; used mainly to test
high-performance and high-availability applications.
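A usage-before-initialization anomaly of the kind data flow testing targets can be seen in this contrived Python function; only the path where `start_fresh` is false is faulty, which is exactly why path-sensitive analysis is needed:

```python
def running_total(values, start_fresh):
    if start_fresh:
        total = 0          # `total` is defined only on this branch
    for v in values:
        total += v         # use of `total`: undefined when start_fresh is False
    return total

print(running_total([1, 2, 3], True))   # the healthy path works
try:
    running_total([1, 2, 3], False)     # the faulty path
except UnboundLocalError as err:
    print("data-flow anomaly:", err)
```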
Equivalence Partitioning
• A functional testing criterion.
• Applicable when the inputs are independent, that is, there
are no input combinations.
• How is EP done ?
• Divide the input space into finite partitions.
• For each partition defined, create a set of test cases.
Develop test cases covering as many partitions as possible.
• For each invalid partition, develop additional test cases.
• Use Coverage Matrix to keep track of the test cases.
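A minimal sketch of the steps above, assuming a made-up specification where valid inputs are integers from 1 to 100 (the partition names and ranges are assumptions, not from the lecture):

```python
# Partitions of the input space for the assumed spec "valid: 1..100":
partitions = {
    "below_range": lambda x: x < 1,          # invalid partition
    "in_range":    lambda x: 1 <= x <= 100,  # valid partition
    "above_range": lambda x: x > 100,        # invalid partition
}

test_cases = [-5, 50, 150]  # one representative input per partition

def coverage_matrix(cases):
    # Track which test cases exercise which partition.
    return {name: [x for x in cases if hits(x)]
            for name, hits in partitions.items()}

print(coverage_matrix(test_cases))
# {'below_range': [-5], 'in_range': [50], 'above_range': [150]}
```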
Boundary Value Analysis
• An important technique to detect errors occurring at component
boundaries.
• Several errors tend to occur when components interact.
• Programmers tend to focus on implementing their code correctly and
generally overlook how to handle exceptions that MAY occur.
• As an example, consider an API that tests if a point lies in a rectangle.
The CRect class has an API bool PtInRect(POINT p) that accepts a
POINT-type input parameter and returns a BOOL depending on the
position of the point w.r.t. the rectangle.
Boundary Value Testing
• From a programmer's point of view, the implementation is
straightforward: check if the point is within the
coordinates of the rectangle and return an appropriate value.
• Some Special cases:
- Point is “ON” the rectangle.
- Point is one of the vertices itself (special case of the above).
• What should happen in these cases ? Have these cases been
taken care of by the developer ? Boundary value testing helps
solve some problems of these types.
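The special cases can be sketched in Python. The `pt_in_rect` function below imitates the Win32 PtInRect convention (a point on the left or top edge counts as inside, one on the right or bottom edge as outside); it is an illustration, not the actual CRect code:

```python
def pt_in_rect(point, rect):
    # rect = (left, top, right, bottom); half-open on the right and
    # bottom, mirroring the Win32 PtInRect convention.
    x, y = point
    left, top, right, bottom = rect
    return left <= x < right and top <= y < bottom

rect = (0, 0, 10, 10)
# Boundary value cases the developer may have overlooked:
assert pt_in_rect((0, 5), rect)          # point ON the left edge: inside
assert pt_in_rect((0, 0), rect)          # point AT the top-left vertex: inside
assert not pt_in_rect((10, 5), rect)     # point ON the right edge: outside
assert not pt_in_rect((10, 10), rect)    # bottom-right vertex: outside
```

Whichever convention the developer chose, boundary value tests like these make the choice explicit instead of accidental.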
Random Testing
• Select a random input from a given domain
– can be either input or output domain, but most
of the time, input domain is used.
• Duran and Ntafos published a paper on random
testing in IEEE Transactions on Software
Engineering in the 1980s. Several topics about
random testing were discussed.
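Random testing over an input domain can be sketched as below; `buggy_max` is a made-up unit with a seeded fault, and Python's built-in `max` serves as the oracle:

```python
import random

def buggy_max(a, b):
    # Hypothetical unit under test with a seeded fault: wrong answer
    # whenever both inputs are negative (and unequal).
    if a < 0 and b < 0:
        return min(a, b)            # the fault
    return a if a >= b else b

def random_test(n=1000, seed=42):
    # Draw n inputs uniformly from the domain and compare each output
    # against the oracle.
    rng = random.Random(seed)
    failures = []
    for _ in range(n):
        a, b = rng.randint(-100, 100), rng.randint(-100, 100)
        if buggy_max(a, b) != max(a, b):
            failures.append((a, b))
    return failures

print(len(random_test()), "failure-causing inputs out of 1000")
```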
Assumptions Made in the Paper
• Finding a single failure is equivalent to finding a fault.
• Domains of faults do not interact with each other.
• Each domain contains at most one fault.
• Failure rate is assumed to be uniform.
• Pr & Pp: probabilities of finding one or more faults using
random and partition testing respectively.
• Er & Ep: expected numbers of faults.
Results of the Paper
• Pr/Pp = 90%
• Er/Ep = 90%
• The authors also performed some (close to ten) real
experiments on random testing, and found random testing
was almost always as effective as partition testing.
• The authors concluded that the costs incurred to compare
the results of running random testing are similar to those of
running partition testing. However, the cost of generating
random test inputs is low compared with test case
generation in partition testing, so we should take
random testing seriously.
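Under the paper's uniform-failure-rate assumption, the Pr/Pp comparison can be sketched analytically. The model below is a simplified illustration (one faulty partition out of k equal partitions), not the paper's exact analysis:

```python
def p_random(theta, n):
    # Probability that n random tests expose at least one failure,
    # given overall failure rate theta.
    return 1 - (1 - theta) ** n

def p_partition(theta, k, n):
    # k equal partitions, n/k tests each; the single faulty partition
    # concentrates the failures, so its failure rate is k * theta.
    return 1 - (1 - min(1.0, k * theta)) ** (n // k)

theta, k, n = 0.01, 10, 100
ratio = p_random(theta, n) / p_partition(theta, k, n)
print(round(ratio, 2))  # 0.97: random testing is nearly as effective
```

For these (assumed) parameters, random testing achieves about 97% of partition testing's fault-finding probability at a fraction of the test-generation cost.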