2. What They Say…
• "Testing is the process of executing a program with
the intention of finding errors." – Myers
• "Testing can show the presence of bugs but never
their absence." – Dijkstra
3. Definition
• Testing is the process of exercising a program with
the specific intent of finding errors prior to delivery
to the end user.
4. Who Tests the Software?
• Developer
– Understands the system
– but will test "gently"
– and is driven by "delivery"
• Independent tester
– Must learn about the system
– but will attempt to break it
– and is driven by quality
5. Characteristics of Testable Software
• Operable
– The better it works (i.e., the better the quality), the easier it is to test
• Observable
– Incorrect output is easily identified; internal errors are automatically detected
• Controllable
– The states and variables of the software can be controlled directly by the tester
• Decomposable
– The software is built from independent modules that can be tested independently
• Simple
– The program should exhibit functional, structural, and code simplicity
• Stable
– Changes to the software during testing are infrequent and do not invalidate existing tests
• Understandable
– The architectural design is well understood; documentation is available and organized
6. Test Characteristics
• A good test has a high probability of finding an error
– The tester must understand the software and how it might fail
• A good test is not redundant
– Testing time is limited; one test should not serve the same purpose as another test
• A good test should be "best of breed"
– Tests that have the highest likelihood of uncovering a whole class of errors should be used
• A good test should be neither too simple nor too complex
– Each test should be executed separately; combining a series of tests could cause side effects and mask certain errors
7. A Strategy for Software Testing
• integrates the design of software test cases into a well-planned series of steps that result in the successful development of the software
• The strategy provides a road map that describes the steps to be taken, when they will be taken, and how much effort, time, and resources will be required
• The strategy incorporates test planning, test case design, test execution, and test result collection and evaluation
• The strategy provides guidance for the practitioner and a set of milestones for the manager
• Because of time pressures, progress must be measurable and problems must surface as early as possible
9. General Characteristics of Strategic Testing
• To perform effective testing, a software team should conduct effective formal technical reviews
• Testing begins at the component level and works outward toward the integration of the entire computer-based system
• Different testing techniques are appropriate at different points in time
• Testing is conducted by the developer of the software and (for large projects) by an independent test group
• Testing and debugging are different activities, but debugging must be accommodated in any testing strategy
10. A Strategy for Testing Conventional Software
• Each testing step corresponds to an engineering artifact: unit testing exercises the code, integration testing the design, validation testing the requirements, and system testing the system engineering as a whole
• (Figure labels: abstract to concrete; narrow to broader scope)
13. Unit Testing
• Unit testing is a software development process in which the smallest testable parts of an application, called units, are individually and independently scrutinized for proper operation. Unit testing is often automated, but it can also be done manually.
• Unit tests focus on:
– Algorithms and logic
– Data structures (global and local)
– Interfaces
– Independent paths
– Boundary conditions
– Error handling
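As a sketch of these focus areas, the following unit test exercises boundary conditions and error handling for a hypothetical `clamp` function (the function and test names are illustrative, not from the original):

```python
import unittest

def clamp(value, low, high):
    """Restrict value to the inclusive range [low, high]."""
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))

class ClampTest(unittest.TestCase):
    def test_boundaries(self):
        # Boundary conditions: values at and just beyond the range edges
        self.assertEqual(clamp(5, 0, 10), 5)    # typical value inside the range
        self.assertEqual(clamp(0, 0, 10), 0)    # exactly on the lower edge
        self.assertEqual(clamp(10, 0, 10), 10)  # exactly on the upper edge
        self.assertEqual(clamp(-1, 0, 10), 0)   # just below the range
        self.assertEqual(clamp(11, 0, 10), 10)  # just above the range

    def test_error_handling(self):
        # Error handling: invalid arguments should raise, not fail silently
        with self.assertRaises(ValueError):
            clamp(5, 10, 0)

# Run automatically with: python -m unittest <this_file>
```
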
14. Unit Testing
• (Diagram) The software engineer designs test cases for the module to be tested and evaluates the results
• The test cases target:
– interface
– local data structures
– boundary conditions
– independent paths
– error handling paths
16. Integration Testing
• Integration testing (sometimes called integration and testing, abbreviated I&T) is the phase in software testing in which individual software modules are combined and tested as a group. It occurs after unit testing and before validation testing.
• Integration: combining two or more software units
– often a subset of the overall project
17. Why Integration Testing Is Necessary
• One module can have an adverse effect on another
• Subfunctions, when combined, may not produce the desired major function
• Individually acceptable imprecision in calculations may be magnified to unacceptable levels
• Interfacing errors not detected in unit testing may appear
• Timing problems (in real-time systems) are not detectable by unit testing
• Resource contention problems are not detectable by unit testing
19. Phased Integration
• phased ("big-bang") integration:
– design, code, test, debug each class/unit/subsystem separately
– combine them all
– pray
20. Top-Down Integration
• top-down integration:
Start with the outer UI layers and work inward
– must write (lots of) stubs for the lower layers the UI interacts with
– allows postponing tough design/debugging decisions (bad?)
21. Problems with Top-Down Integration
• Many times, calculations are performed in the modules at the bottom of the hierarchy
• Stubs typically do not pass data up to the higher modules
• Delaying testing until lower-level modules are ready usually results in integrating many modules at the same time rather than one at a time
• Developing stubs that can pass data up is almost as much work as developing the actual module
22. Bottom-Up Integration
• bottom-up integration:
Start with the low-level data/logic layers and work outward
– must write test drivers to run these layers
– won't discover high-level / UI design flaws until late
23. Problems with Bottom-Up Integration
• The whole program does not exist until the last module is integrated
• Timing and resource contention problems are not found until late in the process
24. Stubs
• stub: a controllable replacement for an existing software unit on which your code under test depends
– useful for simulating difficult-to-control elements:
• network / internet
• database
• time/date-sensitive code
• files
• threads
• memory
– also useful when dealing with brittle legacy code/systems
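A minimal sketch of the idea, using a hypothetical `Greeter` class whose time/date-sensitive dependency is replaced by a stub clock (all names here are illustrative):

```python
from datetime import datetime, timezone

class Greeter:
    """Greets the user differently depending on the time of day."""
    def __init__(self, clock=None):
        # The clock is an injectable dependency; tests can replace it with a stub.
        self._clock = clock or (lambda: datetime.now(timezone.utc))

    def greeting(self):
        return "Good morning" if self._clock().hour < 12 else "Good afternoon"

# Stub clocks: the difficult-to-control element (real time) is replaced
# with fixed, controllable values.
def stub_morning():
    return datetime(2024, 1, 1, 9, 0, tzinfo=timezone.utc)

def stub_afternoon():
    return datetime(2024, 1, 1, 15, 0, tzinfo=timezone.utc)

assert Greeter(stub_morning).greeting() == "Good morning"
assert Greeter(stub_afternoon).greeting() == "Good afternoon"
```

The same injection pattern applies to networks, databases, and files: the unit under test talks to an interface, and the test supplies a controllable implementation.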
25. "Sandwich" Integration
• "sandwich" integration:
Connect the top-level UI with crucial bottom-level classes
– add middle layers later as needed
– more practical than top-down or bottom-up?
27. System Testing
• System testing of software or hardware is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements.
• System testing falls within the scope of black box testing and, as such, should require no knowledge of the inner design of the code or logic.
28. Principles of System Testing
System Testing Process
• Function testing: does the integrated system perform as promised by the requirements specification?
• Performance testing: are the non-functional requirements met?
• Acceptance testing: is the system what the customer expects?
• Installation testing: does the system run at the customer site(s)?
29. Performance Tests
Purpose and Roles
• Used to examine
– the calculation
– the speed of response
– the accuracy of the result
– the accessibility of the data
• Designed and administered by the test team
31. Reliability, Availability, and Maintainability
Definitions
• Software reliability: operating without failure under given conditions for a given time interval
• Software availability: operating successfully according to specification at a given point in time
• Software maintainability: for a given condition of use, a maintenance activity can be carried out within stated time intervals, procedures, and resources
Levels of Failure Severity
• Catastrophic: causes death or system loss
• Critical: causes severe injury or major system damage
• Marginal: causes minor injury or minor system damage
• Minor: causes no injury or system damage
32. Acceptance Tests
• Enable the customers and users to determine whether the built system meets their needs and expectations
• Written, conducted, and evaluated by the customers
• Pilot test: install on an experimental basis
• Alpha test: in-house test
• Beta test: customer pilot
• Parallel testing: the new system operates in parallel with the old system
33. Installation Testing
• Before the testing
– Configure the system
– Attach the proper number and kind of devices
– Establish communication with other systems
• The testing
– Regression tests: verify that the system has been installed properly and works
36. White Box Testing
White box testing is testing where we use the information available from the code of the component to generate tests.
This information is usually used to achieve coverage in one way or another, e.g.:
• Code coverage
• Path coverage
• Decision coverage
Debugging will always be white-box testing.
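A small illustration of decision coverage, using a hypothetical `discount` function (names and values are illustrative): each of its two decisions must evaluate to both true and false somewhere across the test set.

```python
def discount(price, is_member):
    # Decision 1: membership branch
    if is_member:
        price *= 0.9
    # Decision 2: bulk-price threshold branch
    if price > 100:
        price -= 5
    return price

# Decision coverage requires each decision to take both outcomes.
# These two tests suffice (True/True and False/False); full path
# coverage would need all four combinations of the two decisions.
assert discount(200, True) == 175.0   # 200 * 0.9 = 180, then 180 - 5 = 175
assert discount(50, False) == 50      # neither branch taken
```
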
39. Black Box Testing
Black box testing is also called functional testing. The main ideas are simple:
1. Define the initial component state, the input, and the expected output for the test.
2. Set the component in the required state.
3. Give the defined input.
4. Observe the output and compare it to the expected output.
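The four steps above can be sketched as a table-driven test harness; the `Counter` component and the helper function are hypothetical illustrations:

```python
class Counter:
    """A toy component: accumulates values onto an initial state."""
    def reset(self, state):
        self.total = state
    def process(self, value):
        self.total += value
        return self.total

# Step 1: define state, input, and expected output for each test case
cases = [(0, 5, 5), (10, -3, 7), (100, 0, 100)]

def run_black_box_tests(component, cases):
    """Run each case against the component; return the list of failures."""
    failures = []
    for state, value, expected in cases:
        component.reset(state)             # step 2: set the required state
        actual = component.process(value)  # step 3: give the defined input
        if actual != expected:             # step 4: observe and compare
            failures.append((state, value, expected, actual))
    return failures

assert run_black_box_tests(Counter(), cases) == []
```

Note that nothing in the harness inspects the component's internals; only its externally observable behaviour is checked.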
40. Info for Black Box Testing
Not having access to the code does not mean that any one test is just as good as another. We should consider the following information:
• Algorithm understanding
• Parts of the solution that are difficult to implement
• Special, often seldom-occurring, cases
41. Black Box vs. White Box Testing
We can contrast the two methods as follows:
• White Box testing
– Understanding the implemented code
– Checking the implementation
– Debugging
• Black Box testing
– Understanding the algorithm used
– Checking the solution – functional testing
42. Black Box vs. White Box Testing by Criteria
• Definition
– Black box: a software testing method in which the internal structure/design/implementation of the item being tested is NOT known to the tester
– White box: a software testing method in which the internal structure/design/implementation of the item being tested is known to the tester
• Levels applicable to
– Black box: mainly higher levels of testing (system testing, acceptance testing)
– White box: mainly lower levels of testing (unit testing, integration testing)
• Responsibility
– Black box: generally independent software testers
– White box: generally software developers
• Programming knowledge
– Black box: not required; White box: required
• Implementation knowledge
– Black box: not required; White box: required
• Basis for test cases
– Black box: requirement specifications; White box: detailed design
43. Basis Path Testing
• A white-box technique usually based on the program flow graph
• The cyclomatic complexity of the program is computed from its flow graph using the formula V(G) = E – N + 2, or by counting the conditional statements in the PDL representation and adding 1
• Determine the basis set of linearly independent paths (the cardinality of this set is the program's cyclomatic complexity)
• Prepare test cases that will force the execution of each path in the basis set
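A worked sketch of basis path testing on a hypothetical `classify` function (the function and the edge/node counts below are illustrative):

```python
def classify(x):
    if x < 0:           # conditional statement 1
        return "negative"
    elif x == 0:        # conditional statement 2
        return "zero"
    return "positive"

# Counting conditionals: V(G) = 2 + 1 = 3.
# Equivalently, the flow graph has E = 7 edges and N = 6 nodes
# (two decision nodes, three return nodes, one exit node), so
# V(G) = E - N + 2 = 7 - 6 + 2 = 3.
# The basis set therefore contains 3 independent paths; one test case
# forces the execution of each:
assert classify(-4) == "negative"   # path 1: first condition true
assert classify(0) == "zero"        # path 2: second condition true
assert classify(9) == "positive"    # path 3: both conditions false
```
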
44. Control Structure Testing
• White-box techniques focusing on the control structures present in the software
• Condition testing (e.g., branch testing)
– focuses on testing each decision statement in a software module
– it is important to ensure coverage of all logical combinations of data that may be processed by the module (a truth table may be helpful)
• Data flow testing
– selects test paths according to the locations of variable definitions and uses in the program (e.g., definition-use chains)
• Loop testing
– focuses on the validity of the program loop constructs (i.e., while, for, go to)
– involves checking to ensure loops start and stop when they are supposed to (unstructured loops should be redesigned whenever possible)
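As a sketch of loop testing, the following exercises a hypothetical `total` function at its loop-iteration boundaries (zero, one, a typical count, and the cut-off case):

```python
def total(values, limit):
    """Sum at most `limit` values from the front of the list."""
    s = 0
    for i, v in enumerate(values):
        if i >= limit:
            break          # the loop must stop exactly at the limit
        s += v
    return s

# Loop testing exercises the loop at its iteration boundaries:
assert total([], 10) == 0            # zero iterations (loop body skipped)
assert total([4], 10) == 4           # exactly one iteration
assert total([1, 2, 3], 10) == 6     # a typical number of iterations
assert total([1, 2, 3, 4], 2) == 3   # loop cut off early at the limit
```
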
46. Product Use Testing
Product use under normal operating conditions.
Some terms:
– Alpha testing: done in-house.
– Beta testing: done at the customer site.
Typical goals of beta testing: to determine whether the product works and is free of "bugs."
49. Verification and Validation
• Verification
– Are you building the product right?
– The software must conform to its specification
• Validation
– Are you building the right product?
– The software should do what the user really requires
50. Verification and Validation Process
• Must be applied at each stage of the software development process to be effective
• Objectives
– Discovery of system defects
– Assessment of system usability in an operational situation
52. Performance Testing
• Performance testing is the process of determining the speed or effectiveness of a computer, network, software program, or device.
• Before going into the details, we should understand the factors that govern performance testing:
– Throughput
– Response time
– Tuning
– Benchmarking
53. Stress Testing
• Exercises the system beyond its maximum design load; stressing the system often causes defects to come to light
• Tests the system's failure behaviour: systems should not fail catastrophically, and stress testing checks for unacceptable loss of service or data
• Particularly relevant to distributed systems, which can exhibit severe degradation as a network becomes overloaded
54. Smoke Testing
• Taken from the world of hardware
– Power is applied and a technician checks for sparks, smoke, or other dramatic signs of fundamental failure
• Designed as a pacing mechanism for time-critical projects
– Allows the software team to assess its project on a frequent basis
• Includes the following activities
– The software is compiled and linked into a build
– A series of breadth tests is designed to expose errors that will keep the build from properly performing its function
• The goal is to uncover "show stopper" errors that have the highest likelihood of throwing the software project behind schedule
– The build is integrated with other builds, and the entire product is smoke tested daily
• Daily testing gives managers and practitioners a realistic assessment of the progress of integration testing
– After a smoke test is completed, detailed test scripts are executed
57. Debugging Process
• Debugging occurs as a consequence of successful testing
• It is still very much an art rather than a science
• Good debugging ability may be an innate human trait
• Large variances in debugging ability exist
• The debugging process begins with the execution of a test case
• Results are assessed, and the difference between expected and actual performance is encountered
• This difference is a symptom of an underlying cause that lies hidden
• The debugging process attempts to match symptom with cause, thereby leading to error correction
58. Why Is Debugging So Difficult?
• The symptom and the cause may be geographically remote
• The symptom may disappear (temporarily) when another error is corrected
• The symptom may actually be caused by non-errors (e.g., round-off inaccuracies)
• The symptom may be caused by human error that is not easily traced
59. Why Is Debugging So Difficult? (continued)
• The symptom may be a result of timing problems rather than processing problems
• It may be difficult to accurately reproduce input conditions, such as asynchronous real-time information
• The symptom may be intermittent, as in embedded systems involving both hardware and software
• The symptom may be due to causes that are distributed across a number of tasks running on different processors
60. Debugging Strategies
• The objective of debugging is to find and correct the cause of a software error
• Bugs are found by a combination of systematic evaluation, intuition, and luck
• Debugging methods and tools are not a substitute for careful evaluation based on a complete design model and clear source code
• There are three main debugging strategies
– Brute force
– Backtracking
– Cause elimination
61. Strategy #1: Brute Force
• Most commonly used and least efficient method
• Used when all else fails
• Involves the use of memory dumps, run-time traces, and output statements
• Often leads to wasted effort and time
62. Strategy #2: Backtracking
• Can be used successfully in small programs
• The method starts at the location where a symptom has been uncovered
• The source code is then traced backward (manually) until the location of the cause is found
• In large programs, the number of potential backward paths may become unmanageably large
63. Strategy #3: Cause Elimination
• Involves the use of induction or deduction and introduces the concept of binary partitioning
– Induction (specific to general): prove that a specific starting value is true; then prove the general case is true
– Deduction (general to specific): show that a specific conclusion follows from a set of general premises
• Data related to the error occurrence are organized to isolate potential causes
• A cause hypothesis is devised, and the aforementioned data are used to prove or disprove the hypothesis
• Alternatively, a list of all possible causes is developed, and tests are conducted to eliminate each cause
• If initial tests indicate that a particular cause hypothesis shows promise, the data are refined in an attempt to isolate the bug
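The binary-partitioning idea can be sketched as a bisection over an ordered set of candidate inputs. Everything here is a hypothetical illustration, and the sketch assumes the classic bisection precondition: all inputs before the cause pass and all inputs after it fail.

```python
def first_failing(items, check):
    """Binary-partition a sorted run of inputs to find the first one for
    which `check` fails, halving the suspect region on each iteration."""
    lo, hi = 0, len(items) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if check(items[mid]):
            lo = mid + 1   # everything up to mid passes; cause lies above
        else:
            hi = mid       # mid fails; cause lies at mid or below
    return items[lo]

# Hypothetical scenario: a computation breaks for inputs >= 7.
# Seven checks over ten candidates are reduced to about log2(10) checks.
assert first_failing(list(range(10)), lambda n: n < 7) == 7
```

The same partitioning shape underlies tools that bisect over revision history to find the change that introduced a bug.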
64. Three Questions to Ask Before Correcting the Error
• Is the cause of the bug reproduced in another part of the program?
– Similar errors may be occurring in other parts of the program
• What next bug might be introduced by the fix that I'm about to make?
– The source code (and even the design) should be studied to assess the coupling of logic and data structures related to the fix
• What could we have done to prevent this bug in the first place?
– This is the first step toward software quality assurance
– By correcting the process as well as the product, the bug will be removed from the current program and may be eliminated from all future programs