1. B. Computer Sci. (SE) (Hons.)
CSEB233: Fundamentals of Software Engineering
Software Verification and Validation
2. Objectives
• Discuss the fundamental concepts of software verification and validation
• Conduct software testing and determine when to stop
• Describe several types of testing:
  - unit testing,
  - integration testing,
  - validation testing, and
  - system testing
• Produce a standard for software test documentation
• Use a set of techniques for the creation of test cases that meet overall testing objectives and the testing strategies
4. Verification & Validation (1)
• V&V must be applied at each framework activity in the software process
• Verification refers to the set of tasks that ensure that software correctly implements a specific function
• Validation refers to a different set of tasks that ensure that the software that has been built is traceable to customer requirements
• Boehm states this another way:
  - Verification: "Are we building the product right?"
  - Validation: "Are we building the right product?"
5. Verification & Validation (2)
• V&V have two principal objectives:
  - Discover defects in a system
  - Assess whether or not the system is useful and usable in an operational situation
• V&V should establish confidence that the software is fit for purpose
  - This does NOT mean completely free of defects
  - Rather, it must be good enough for its intended use, and the type of use will determine the degree of confidence that is needed
8. Software Testing
• The process of exercising a program with the specific intent of finding errors prior to delivery to the end user
• Must be planned carefully to avoid wasting development time and resources, and conducted systematically
[Figure: What testing shows]
9. Who Tests the Software? (1)
• Developer
  - Understands the system, but will test "gently"
  - Driven by "delivery"
• Independent Tester
  - Must learn about the system
  - Will attempt to break it
  - Driven by quality
10. Who Tests the Software? (2)
• Misconceptions:
  - The developer should do no testing at all
  - Software should be "tossed over the wall" to strangers who will test it mercilessly
  - Testers are not involved with the project until it is time for it to be tested
11. Who Tests the Software? (3)
• The developer and Independent Test Group (ITG) must work together throughout the software project to ensure that thorough tests will be conducted
  - An ITG does not have the "conflict of interest" that the software developer might experience
  - While testing is conducted, the developer must be available to correct errors that are uncovered
12. Testing Strategy (1)
• Identifies the steps to be undertaken, when these steps are undertaken, and how much effort, time, and resources are required
• Any testing strategy must incorporate:
  - Test planning
  - Test case design
  - Test execution
  - Resultant data collection and evaluation
• Should provide guidance for the practitioners and a set of milestones for the manager
13. Testing Strategy (2)
• Characteristics of software testing strategies proposed in the literature:
  - To perform effective testing, you should conduct effective technical reviews
    * By doing this, many errors will be eliminated before testing commences
  - Testing begins at the component level and works "outward" toward the integration of the entire computer-based system
14. Testing Strategy (3)
  - Different testing techniques are appropriate for different software engineering approaches and at different points in time
  - Testing is conducted by the developer of the software and (for large projects) an independent test group
  - Testing and debugging are different activities, but debugging must be accommodated in any testing strategy
15. Overall Software Testing Strategy
• May be viewed in the context of the spiral
• Begins by "testing-in-the-small" and moves toward "testing-in-the-large"
16. Overall Software Testing Strategy
• Unit Testing
  - focuses on each unit of the software (e.g., component, module, class) as implemented in source code
• Integration Testing
  - focuses on issues associated with verification and program construction as components begin interacting with one another
17. Overall Software Testing Strategy
• Validation Testing
  - provides assurance that the software meets the validation criteria (established during requirements analysis): all functional, behavioral, and performance requirements
• System Testing
  - verifies that all system elements mesh properly and that overall system function and performance have been achieved
18. When to Stop Testing?
• Testing is potentially endless
  - We cannot test until all the defects are unearthed and removed - which is impossible
• At some point, we have to stop testing and ship the software
  - The question is: when?
• Realistically, testing is a trade-off between budget, time, and quality
• It is driven by profit models (Pan, 1999)
19. When to Stop Testing?
• The pessimistic, and unfortunately most often used, approach is to stop testing whenever some or any of the allocated resources - time, budget, or test cases - are exhausted
• The optimistic stopping rule is to stop testing when either reliability meets the requirement, or the benefit from continued testing cannot justify the testing cost
21. Unit Testing
• Focuses on assessing:
  - internal processing logic and data structures within the boundaries of a component (module)
  - proper information flow of module interfaces
  - local data, to ensure that integrity is maintained
  - boundary conditions
  - basis (independent) paths
  - all error-handling paths
• If resources are too scarce for comprehensive unit testing, select critical or complex modules and unit test these only (a minimal sketch of a unit test follows)
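To make this concrete, here is a minimal unit-test sketch in C++ (an added illustration, not from the original slides). It uses a hypothetical clamp() module and plain assert to exercise a normal value, boundary conditions, and an error-handling path:

    #include <cassert>
    #include <stdexcept>

    // Hypothetical module under test: clamps v into [lo, hi] and
    // throws if the range itself is invalid (an error-handling path).
    int clamp(int v, int lo, int hi) {
        if (lo > hi) throw std::invalid_argument("lo > hi");
        if (v < lo) return lo;
        if (v > hi) return hi;
        return v;
    }

    int main() {
        assert(clamp(5, 0, 10) == 5);    // normal interior value
        assert(clamp(0, 0, 10) == 0);    // boundary: exactly at the lower limit
        assert(clamp(-1, 0, 10) == 0);   // boundary: just below the lower limit
        assert(clamp(10, 0, 10) == 10);  // boundary: exactly at the upper limit
        assert(clamp(11, 0, 10) == 10);  // boundary: just above the upper limit
        bool threw = false;              // error-handling path: invalid range
        try { clamp(1, 10, 0); } catch (const std::invalid_argument&) { threw = true; }
        assert(threw);
        return 0;
    }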
23. Integration Testing
• After unit testing, the individual modules are combined into a system
• Question commonly asked once all modules have been unit tested:
  - "If they work individually, why do you doubt that they'll work when we put them together?"
• The problem is "putting them together" - interfacing
  - Data can be lost across an interface (illustrated in the sketch below)
  - Global data structures can present problems
  - Subfunctions, when combined, may not produce the desired function
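As an added illustration of data lost across an interface (hypothetical functions, not from the original slides): two modules that each pass unit testing can still fail when combined, for example when a caller silently narrows a value at the call boundary:

    #include <iostream>

    // Module 1: computes an average as a double; its unit tests pass.
    double average(double a, double b) { return (a + b) / 2.0; }

    // Module 2: was written to take an int, so the fractional part
    // is truncated at the interface between the two modules.
    void report(int avg) { std::cout << "average = " << avg << '\n'; }

    int main() {
        double avg = average(3.0, 4.0);  // 3.5 - correct in isolation
        report(avg);                     // prints "average = 3": the defect
        return 0;                        // only appears during integration
    }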
25. Bottom-up Integration
• An approach where the lowest-level modules are tested first, then used to facilitate the testing of higher-level modules
  - The process is repeated until the module at the top of the hierarchy is tested
  - Top-level modules are the most important, yet they are tested last
• It is helpful only when all or most of the modules of the same development level are ready
26. Bottom-up Integration
The steps:
• Test D and E individually, using a dummy program - a "driver" (a sketch of a driver follows)
• Low-level components are combined into clusters that perform a specific software function
• The cluster is tested: test C such that it calls D/E - if an error occurs, we know that the problem is in C or in the interface between C and D/E
• Drivers are removed and clusters are combined, moving upward in the program structure
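A minimal sketch of such a driver in C++ (an added example; the module names D and E are taken from the slide, their bodies are hypothetical): the driver is throwaway code that exercises the low-level modules before the higher-level module C exists:

    #include <cassert>

    // Hypothetical low-level modules that are already implemented:
    int D(int x) { return x * 2; }
    int E(int x) { return x + 1; }

    // Driver: a dummy main() that feeds inputs to D and E and checks
    // their outputs, standing in for the not-yet-integrated module C.
    int main() {
        assert(D(3) == 6);   // exercise D in isolation
        assert(E(3) == 4);   // exercise E in isolation
        return 0;            // once C is ready, the driver is discarded
    }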
27. Top-down Integration
The steps:
• The main/top module is used as a test driver, and stubs are substituted for the modules directly subordinate to it
• Subordinate stubs are replaced one at a time with real modules (following a depth-first or breadth-first approach)
• Tests are conducted as each module is integrated
• On completion of each set of tests, another stub is replaced with a real module
• Regression testing may be used to ensure that new errors are not introduced
• The process continues from the 2nd step until the entire program structure is built
28. Top-down Integration
Example steps:
• Test A individually (use stubs for the other modules)
• Depending on the integration approach selected, subordinate stubs are replaced one at a time with actual components
• In a "depth-first" structure: test A such that it calls B (use stubs for the other modules) - if an error occurs, we know that the problem is in B or in the interface between A and B
• Replace stubs one at a time, "depth-first", and re-run the tests (a sketch of a stub follows)
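Conversely, a stub replaces a module that is not yet written. A hedged C++ sketch (hypothetical bodies; A and B are the module names from the slide): A is tested while calling a stub of B that returns a canned value, and the stub is later swapped for the real B:

    #include <cassert>

    // Stub for module B: not yet implemented, so it returns a
    // hard-coded value that lets A's logic be exercised.
    int B_stub(int) { return 42; }

    // Top-level module A, taking B through a function pointer so the
    // stub can later be replaced by the real module without editing A.
    int A(int x, int (*b)(int)) { return b(x) + 1; }

    int main() {
        // Any failure here is in A or in the A-B interface.
        assert(A(7, B_stub) == 43);
        return 0;
    }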
29. Regression Testing (1)
• Focuses on retesting after changes are made
  - Whenever software is corrected, some aspect of the software configuration is changed
    * e.g., the program, its documentation, or the data that support it
  - Regression testing helps to ensure that changes - due to testing or for other reasons - do not introduce unintended behavior or additional errors
30. Regression Testing (2)
• In traditional regression testing, we reuse the same tests
• In risk-oriented regression testing, we test the same areas as before, but we use different (increasingly complex) tests
• Regression testing may be conducted manually, by re-executing a subset of all test cases, or by using automated capture/playback tools (a minimal harness is sketched below)
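A regression suite can be as simple as a table of test functions that is re-executed after every change. A minimal hand-rolled harness in C++ (an added sketch, not a tool named in the slides; square() is a hypothetical function under maintenance):

    #include <iostream>
    #include <utility>
    #include <vector>

    // Hypothetical function under maintenance:
    int square(int x) { return x * x; }

    // Each regression test returns true on pass.
    bool test_positive() { return square(3) == 9; }
    bool test_zero()     { return square(0) == 0; }
    bool test_negative() { return square(-2) == 4; }

    int main() {
        // The same suite is re-run after every correction to catch
        // unintended behavior introduced by the change.
        std::vector<std::pair<const char*, bool (*)()>> suite = {
            {"positive", test_positive},
            {"zero",     test_zero},
            {"negative", test_negative},
        };
        int failures = 0;
        for (auto& [name, run] : suite)
            if (!run()) { std::cout << "FAIL: " << name << '\n'; ++failures; }
        std::cout << failures << " failure(s)\n";
        return failures;
    }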
31. Smoke Testing (1)
• A common approach for creating "daily builds" for product software
• Software components that have been translated into code are integrated into a "build"
• A build includes all data files, libraries, reusable modules, and engineered components that are required to implement one or more product functions
• A series of tests is designed to expose errors that will keep the build from properly performing its function
32. Smoke Testing (2)
• The intent should be to uncover "show stopper" errors that have the highest likelihood of throwing the software project behind schedule
• The build is integrated with other builds, and the entire product (in its current form) is smoke tested daily
• The integration approach may be top-down or bottom-up
33. Validation Testing (1)
• Focuses on uncovering errors at the software requirements level
• The SRS might contain a "Validation Criteria" section that forms the basis for a validation-testing approach
34. Validation Testing (2)
• Validation-Test Criteria:
  - all functional requirements are satisfied
  - all behavioral characteristics are achieved
  - all content is accurate and properly presented
  - all performance requirements are attained, documentation is correct, and
  - usability and other requirements are met
35. Validation Testing (3)
• An important element of the validation process is a configuration review/audit
  - Ensures that all elements of the software configuration have been properly developed, are cataloged, and have the necessary detail to strengthen the support activities
36. Validation Testing (4)
• A series of acceptance tests is conducted to enable the customer to validate all requirements
  - To make sure the software works correctly for the intended user in his or her normal work environment
  - Alpha test
    * A version of the complete software is tested by the customer under the supervision of the developer, at the developer's site
  - Beta test
    * A version of the complete software is tested by the customer at his or her own site, without the developer being present
37. System Testing (1)
• A series of different tests to verify that system elements have been properly integrated and perform allocated functions
• Types of system tests:
  - Recovery Testing
  - Security Testing
  - Stress Testing
  - Performance Testing
  - Deployment Testing
38. System Testing (2)
• Recovery Testing
  - forces the software to fail in a variety of ways and verifies that recovery is properly performed
• Security Testing
  - verifies that protection mechanisms built into a system will, in fact, protect it from improper penetration
• Stress Testing
  - executes a system in a manner that demands resources in abnormal quantity, frequency, or volume
39. System Testing (3)
• Performance Testing
  - tests the run-time performance of software within the context of an integrated system
• Deployment Testing
  - examines all installation procedures and specialized installation software that will be used by customers
  - examines all documentation that will be used to introduce the software to end users
41. Software Test Documentation (1)
• IEEE 829-2008, Standard for Software Test Documentation
• An IEEE standard that specifies the form of a set of documents for use in eight defined stages of software testing
• The documents are:
  - Test Plan
  - Test Design Specification
  - Test Case Specification
  - Test Procedure Specification
  - Test Item Transmittal Report
  - Test Log
  - Test Incident Report
  - Test Summary Report
42. Software Test Documentation (2)
• Test Plan - a management planning document that shows:
  - How the testing will be done
    * including System Under Test (SUT) configurations
  - Who will do it
  - What will be tested
  - How long it will take - may vary, depending upon resource availability
  - What the test coverage will be, i.e., what quality level is required
43. Software Test Documentation (3)
• Test Design Specification:
  - detailing test conditions and the expected results, as well as test pass criteria
• Test Case Specification:
  - specifying the test data for use in running the test conditions identified in the Test Design Specification
• Test Procedure Specification:
  - detailing how to run each test, including any set-up preconditions and the steps that need to be followed
44. Software Test Documentation (4)
• Test Item Transmittal Report:
  - reporting on when tested software components have progressed from one stage of testing to the next
• Test Log:
  - recording which test cases were run, who ran them, in what order, and whether each test passed or failed
• Test Incident Report:
  - detailing, for any test that failed, the actual versus expected result, and other information intended to throw light on why a test has failed
45. Software Test Documentation (5)
• Test Summary Report:
  - A management report providing any important information uncovered by the tests accomplished, including assessments of the quality of the testing effort, the quality of the software system under test, and statistics derived from Incident Reports
  - The report also records what testing was done and how long it took, in order to improve any future test planning
  - This final document is used to indicate whether the software system under test is fit for purpose, according to whether or not it has met the acceptance criteria defined by the project stakeholders
47. Test-case Design (1)
• Focuses on a set of techniques for the creation of test cases that meet overall testing objectives and the testing strategies
• These techniques provide systematic guidance for designing tests that:
  - Exercise the internal logic and interfaces of every software component/module
  - Exercise the input and output domains of the program to uncover errors in program function, behavior, and performance
48. Test-case Design (2)
• For conventional applications, software is tested from two perspectives:
• "White-box" testing
  - Focuses on the program control structure (internal program logic)
  - Test cases are derived to ensure that all statements in the program have been executed at least once during testing and that all logical conditions have been exercised
  - Performed early in the testing process
• "Black-box" testing
  - Examines some fundamental aspect of a system with little regard for the internal logical structure of the software
  - Performed during later stages of testing
49. White-box Testing (1)
• Using the white-box testing method, you may derive test cases that:
  - Guarantee that all independent paths within a module have been exercised at least once
  - Exercise all logical decisions on their true and false sides
  - Execute all loops at their boundaries and within their operational bounds
  - Exercise internal data structures to ensure their validity
• Example method: basis path testing
50. White-box Testing (2)
• Basis path testing:
  - Test cases derived to exercise the basis set are guaranteed to execute every statement in the program at least once during testing
51. Deriving Test Cases (1)
• Steps to derive test cases by applying the basis path testing method:
  - Using the design or code, draw a corresponding flow graph
    * The flow graph depicts logical control flow using the notation illustrated in the next slide
    * Refer to Figure 18.2 on page 486 for a comparison between a flowchart and a flow graph
  - Calculate the cyclomatic complexity V(G) of the flow graph
  - Determine a basis set of independent paths
  - Prepare test cases that will force execution of each path in the basis set
52. Deriving Test Cases (2)
• Flow graph notation:
[Figure: flow graph notation for the Sequence, IF, WHILE, UNTIL, and CASE constructs]
53. Drawing Flow Graph: Example

    void foo (float y, float a[], int n)
    {
        float z;
        float x = sin (y) ;
        if (x > 0.01)                   // node 1
            z = tan (x) ;               // node 2
        else
            z = cos (x) ;               // node 3
        for (int i = 0 ; i < x ; ++i)   // node 5
        {
            a[i] = a[i] * z ;           // node 6
            cout << a[i] ;              // node 7
        }
    }                                   // node 8

[Figure: the corresponding flow graph, with join node 4, predicate nodes 1 and 5, and regions R1-R3]
54. Deriving Test Cases (3)
• The arrows on the flow graph, called edges or links, represent flow of control and are analogous to flowchart arrows
• Areas bounded by edges and nodes are called regions
  - When counting regions, we include the area outside the graph as a region
56. Deriving Test Cases: Example
Step 2: Calculate the cyclomatic complexity, V(G)
• Cyclomatic complexity can be used to count the minimum number of independent paths
• A number of industry studies have indicated that the higher the V(G), the higher the probability of errors
• The SEI provides the following basic risk assessment based on the value of V(G):

  Cyclomatic Complexity | Risk Evaluation
  1 to 10               | A simple program, without very much risk
  11 to 20              | A more complex program, moderate risk
  21 to 50              | A complex, high-risk program
  > 50                  | An untestable program (very high risk)
57. Deriving Test Cases: Example
• Ways to calculate V(G):
  - V(G) = the number of regions of the flow graph
  - V(G) = E - N + 2 (where E is the number of edges and N the number of nodes)
  - V(G) = P + 1 (where P is the number of predicate nodes in the flow graph, i.e., each node that contains a condition)
• Example:
  - V(G) = number of regions = 4
  - V(G) = E - N + 2 = 16 - 14 + 2 = 4
  - V(G) = P + 1 = 3 + 1 = 4
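As an added cross-check (not on the original slide, whose example refers to a larger graph with 14 nodes and 16 edges), the same three formulas can be applied to the flow graph of foo from slide 53, assuming 8 nodes, 9 edges, 2 predicate nodes (the if and the for), and 3 regions counting the outer area:
• V(G) = number of regions = 3
• V(G) = E - N + 2 = 9 - 8 + 2 = 3
• V(G) = P + 1 = 2 + 1 = 3
All three agree, so foo needs a basis set of 3 independent paths.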
58. Deriving Test Cases: Example 1
Step 3: Determine a basis set of independent paths
• Path 1: 1, 2, 3, 4, 5, 6, 7, 8, 12
• Path 2: 1, 2, 3, 12
• Path 3: 1, 2, 3, 4, 5, 9, 10, 3, ...
• Path 4: 1, 2, 3, 4, 5, 9, 11, 3, ...
Step 4: Prepare test cases
• Test cases should be derived so that all of these paths are executed
• A dynamic program analyser may be used to check that paths have been executed
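To make step 4 concrete, here is an added sketch in C++ (not from the original slides; the path numbers above belong to a different flow graph than foo). It restates foo from slide 53 so the sketch is self-contained, and chooses inputs that force each of its three basis paths:

    #include <cmath>
    #include <iostream>

    // foo restated from slide 53:
    void foo(float y, float a[], int n) {
        float z;
        float x = std::sin(y);
        if (x > 0.01)                    // predicate node 1
            z = std::tan(x);
        else
            z = std::cos(x);
        for (int i = 0; i < x; ++i) {    // predicate node 5
            a[i] = a[i] * z;
            std::cout << a[i] << '\n';
        }
    }

    int main() {
        float a[4] = {1.0f, 1.0f, 1.0f, 1.0f};
        foo(0.0f,   a, 4);  // cos branch, x = 0: loop body skipped
        foo(1.0f,   a, 4);  // tan branch, x ~ 0.84: loop body runs once
        foo(0.005f, a, 4);  // cos branch with 0 < x <= 0.01: loop runs once
        return 0;
    }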
59. Summary (1)
• Software testing plays an extremely important role in V&V, but many other SQA activities are also necessary
• Testing must be planned carefully to avoid wasting development time and resources, and conducted systematically
• The developer and ITG must work together throughout the software project to ensure that thorough tests will be conducted
60. Summary (2)
• The software testing strategy begins by "testing-in-the-small" and moves toward "testing-in-the-large"
• The IEEE 829-2008 standard specifies a set of documents for use in eight defined stages of software testing
• The "white-box" and "black-box" techniques provide systematic guidance for designing test cases
• We need to know when it is the right time to stop testing
61. THE END
Copyright © 2013
Mohd. Sharifuddin Ahmad, PhD
College of Information Technology