 Testing is the process of exercising a program with the specific
intent of finding errors prior to delivery to the end user.
 Testing is intended to show that a program does what it is
intended to do and to discover program defects before it is put
into use.
 When you test software, you execute a program using artificial
data.
 You check the results of the test run for errors, anomalies or
information about the program’s non-functional attributes.
 Testing can reveal the presence of errors, NOT their absence.
 Testing is part of a more general verification and validation
process, which also includes static validation techniques.
 To uncover errors.
 To demonstrate to the developer and the customer that
the software meets its requirements.
 For custom software, this means that there should be at least
one test for every requirement in the requirements document.
For generic software products, it means that there should be
tests for all of the system features, plus combinations of these
features, that will be incorporated in the product release.
 To discover situations in which the behavior of the
software is incorrect, undesirable or does not conform
to its specification.
 Defect testing is concerned with rooting out undesirable system
behavior such as system crashes, unwanted interactions with
other systems, incorrect computations and data corruption.
 Operability—it operates cleanly
 Observability—the results of each test case are
readily observed
 Controllability—the degree to which testing can be
automated and optimized
 Decomposability—testing can be targeted
 Simplicity—reduce complex architecture and logic
to simplify tests
 Stability—few changes are requested during
testing
 Understandability—of the design
 Testing shows: errors, requirements conformance, performance, and an indication of quality.
 The developer understands the system, but will test "gently" and is driven by "delivery".
 The independent tester must learn about the system, but will attempt to break it, and is driven by quality.
 These are the basic principles that guide software
testing:
 All tests should be traceable to customer
requirements
 Tests should be planned long before testing begins
 The Pareto principle applies to software testing
 The Pareto principle implies that 80% of all errors
uncovered during testing will likely be traceable to 20%
of all program components. The problem, of course, is
to isolate these suspect components and to thoroughly
test them.
 Testing should begin ‘in the small’ and progress
toward testing ‘in the large’.
 Exhaustive testing is not possible
 To be more effective, testing should be conducted by
an independent third party
Test case design methods and strategies: white-box methods and black-box methods.
 White-Box testing is a test case design method that uses the
control structure of the procedural design to derive test
cases.
 Using white-box testing methods, the software engineer can
derive test cases that
 Guarantee that all independent paths within a module
have been exercised at least once
 Exercise all logical decisions on their operational bounds
 Execute all loops at their boundaries and within their
operational bounds
 Exercise internal data structures to ensure their validity
... our goal is to ensure that all
statements and conditions have
been executed at least once ...
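The coverage goal above can be sketched with a small example; the function and values below are hypothetical, chosen only to illustrate exercising every branch outcome at least once:

```python
# A minimal sketch of white-box branch coverage (hypothetical function).

def classify(x: int) -> str:
    """Return a label for x; two decisions give four branch outcomes."""
    if x < 0:
        label = "negative"
    else:
        label = "non-negative"
    if x % 2 == 0:          # second, independent decision
        label += ", even"
    else:
        label += ", odd"
    return label

# Four test cases exercise every branch outcome at least once.
assert classify(-2) == "negative, even"
assert classify(-1) == "negative, odd"
assert classify(2) == "non-negative, even"
assert classify(1) == "non-negative, odd"
```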
Logic errors and incorrect assumptions are inversely proportional to a path's execution probability.
We often believe that a path is not likely to be executed; in fact, reality is often counter-intuitive.
Typographical errors are random; it's likely that untested paths will contain some.
First, we compute the cyclomatic
complexity:
number of simple decisions + 1
or
number of enclosed areas + 1
In this case, V(G) = 4
A number of industry studies have indicated
that the higher V(G), the higher the probability
of errors; modules with high V(G) are more
error prone.
Next, we derive the
independent paths:
Since V(G) = 4,
there are four paths
Path 1: 1,2,3,6,7,8
Path 2: 1,2,3,5,7,8
Path 3: 1,2,4,7,8
Path 4: 1,2,4,7,2,4,...7,8
Finally, we derive test
cases to exercise these
paths.
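V(G) can also be computed as E - N + 2 (edges minus nodes plus 2). The sketch below reconstructs an edge list for the slide's flow graph from the four paths listed, so treat the exact edges as an assumption:

```python
# Cyclomatic complexity via V(G) = E - N + 2 for the slide's flow graph
# (nodes 1..8); the edge list is reconstructed from the four paths above.

edges = [
    (1, 2), (2, 3), (2, 4),        # decision at node 2
    (3, 5), (3, 6),                # decision at node 3
    (4, 7), (5, 7), (6, 7),
    (7, 8), (7, 2),                # decision at node 7 (loop back)
]
nodes = {n for e in edges for n in e}

v_g = len(edges) - len(nodes) + 2  # 10 - 8 + 2
print(v_g)  # → 4, matching "number of simple decisions + 1" (3 + 1)
```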
Loop testing covers simple loops, nested loops, and concatenated loops.
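For a simple loop, the usual tactic is to exercise the loop at its boundaries and within its operational bounds: zero iterations, one, two, a typical count, and the maximum. A sketch, with a hypothetical function:

```python
# Simple-loop testing sketch: run the loop at boundary and typical
# iteration counts (0, 1, 2, typical, maximum). sum_first is hypothetical.

def sum_first(values, n):
    """Sum the first n items of values (requires n <= len(values))."""
    total = 0
    for i in range(n):
        total += values[i]
    return total

data = [3, 1, 4, 1, 5]
for n in (0, 1, 2, 3, len(data)):   # 0, 1, 2 iterations, typical, maximum
    assert sum_first(data, n) == sum(data[:n])
```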
Black-box test cases are derived from the requirements: input events (user queries, mouse picks, FK input, input data) and outputs (output formats, prompts).
 Valid data
 user-supplied commands
 responses to system prompts
 file names
 computational data
 physical parameters
 bounding values
 initiation values
 output data formatting
 responses to error messages
 graphical data (e.g., mouse picks)
 Invalid data
 data outside the bounds of the program
 physically impossible data
 proper value supplied in the wrong place
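The valid/invalid split above can be sketched as tests on a hypothetical input-parsing function; the bounds and names are invented for illustration:

```python
# Hedged sketch of valid vs. invalid input data: parse_age is a
# hypothetical function that accepts integer ages in [0, 130].

def parse_age(text: str) -> int:
    age = int(text)                      # rejects non-numeric input
    if not 0 <= age <= 130:
        raise ValueError("age out of bounds")
    return age

# Valid data: within bounds, including the boundary values themselves.
for text, expected in [("0", 0), ("35", 35), ("130", 130)]:
    assert parse_age(text) == expected

# Invalid data: outside the program's bounds or not a number at all.
for bad in ["-1", "131", "abc"]:
    try:
        parse_age(bad)
        raise AssertionError(f"{bad!r} should have been rejected")
    except ValueError:
        pass
```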
Test cases are drawn from the program's input domain (user queries, mouse picks, FK input, input data) and checked against its output domain (output formats, prompts).
 error guessing methods
 decision table techniques
 cause-effect graphing
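The decision-table technique pairs each combination of conditions with its expected action; every row becomes a test case. A minimal sketch, with discount rules invented purely for illustration:

```python
# Decision-table sketch: two conditions (membership, order total >= 100)
# give four rules. The discount policy is hypothetical.

def discount(is_member: bool, order_total: float) -> float:
    if is_member and order_total >= 100:
        return 0.15
    if is_member:
        return 0.05
    if order_total >= 100:
        return 0.10
    return 0.0

# Decision table: (is_member, order_total, expected discount) — one test
# case per rule, covering every combination of condition outcomes.
table = [
    (True,  150.0, 0.15),
    (True,   50.0, 0.05),
    (False, 150.0, 0.10),
    (False,  50.0, 0.00),
]
for member, total, expected in table:
    assert discount(member, total) == expected
```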
 The first goal leads to validation testing
 You expect the system to perform correctly using a
given set of test cases that reflect the system’s
expected use.
 The second goal leads to defect testing
 The test cases are designed to expose defects. The
test cases in defect testing can be deliberately
obscure and need not reflect how the system is
normally used.
 Validation testing
 To demonstrate to the developer and the system customer that
the software meets its requirements
 A successful test shows that the system operates as intended.
 Defect testing
 To discover faults or defects in the software where its
behaviour is incorrect or not in conformance with its
specification
 A successful test is a test that makes the system perform
incorrectly and so exposes a defect in the system.
 Verification:
"Are we building the product right?"
 The software should conform to its
specification.
 Validation:
"Are we building the right product?"
 The software should do what the user really
requires.
 Aim of V & V is to establish confidence that the
system is ‘fit for purpose’.
 Depends on system’s purpose, user
expectations and marketing environment
 Software purpose
 The level of confidence depends on how critical the
software is to an organisation.
 User expectations
 Users may have low expectations of certain kinds of
software.
 Marketing environment
 Getting a product to market early may be more
important than finding defects in the program.
 Development testing, where the system is
tested during development to discover bugs
and defects.
 Release testing, where a separate testing team
tests a complete version of the system before it
is released to users.
 User testing, where users or potential users of a
system test the system in their own
environment.
 Development testing includes all testing activities
that are carried out by the team developing the
system.
 Unit testing, where individual program units or object
classes are tested. Unit testing should focus on testing the
functionality of objects or methods.
 Component testing, where several individual units are
integrated to create composite components. Component
testing should focus on testing component interfaces.
 System testing, where some or all of the components in a
system are integrated and the system is tested as a whole.
System testing should focus on testing component
interactions.
 Unit testing is the process of testing individual
components in isolation.
 It is a defect testing process.
 Units may be:
 Individual functions or methods within an object
 Object classes with several attributes and methods
 Composite components with defined interfaces used
to access their functionality.
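A unit test exercises one such unit in isolation. A minimal sketch using Python's unittest module; the Stack class is a hypothetical unit under test:

```python
# Minimal unit-testing sketch with Python's unittest.
# Stack is a hypothetical class tested in isolation from the rest
# of the system, probing both normal behaviour and a defect path.

import unittest

class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

class StackTest(unittest.TestCase):
    def test_push_then_pop_returns_last_item(self):
        s = Stack()
        s.push(1)
        s.push(2)
        self.assertEqual(s.pop(), 2)

    def test_pop_on_empty_stack_raises(self):
        self.assertRaises(IndexError, Stack().pop)

if __name__ == "__main__":
    unittest.main(argv=["stack_test"], exit=False)
```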
 System testing during development involves
integrating components to create a version of the
system and then testing the integrated system.
 The focus in system testing is testing the
interactions between components.
 System testing checks that components are
compatible, interact correctly and transfer the right
data at the right time across their interfaces.
 System testing tests the emergent behaviour of a
system.
 During system testing, reusable components
that have been separately developed and off-
the-shelf systems may be integrated with
newly developed components. The complete
system is then tested.
 Components developed by different team
members or sub-teams may be integrated at
this stage. System testing is a collective rather
than an individual process.
 In some companies, system testing may involve a
separate testing team with no involvement from
designers and programmers.
 Alpha testing
 Users of the software work with the development
team to test the software at the developer’s site.
 Beta testing
 A release of the software is made available to users
to allow them to experiment and to raise problems
that they discover with the system developers.
 Acceptance testing
 Customers test a system to decide whether or not it
is ready to be accepted from the system developers
and deployed in the customer environment.
Primarily for custom systems.