ITIS-3320-091 
Test Levels
2 
Coverage in software testing 
 Coverage: the degree, expressed as a percentage, to which a specified item has been exercised by a test suite (a small computation sketch follows this list)
 Branch coverage: % of the branches that have been exercised
 Code coverage: % of the software that has been executed
 Statement coverage: % of the executable statements that have been exercised
 Coverage tool: a tool that provides objective measures of which structural elements, e.g. statements and branches, have been exercised by the test suite
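As a minimal sketch of how these percentages are computed (plain Python; the counts are hypothetical and not tied to any particular coverage tool):

    # Coverage = (items exercised by the test suite / items specified) * 100
    def coverage_percent(exercised: int, total: int) -> float:
        """Generic coverage formula used for statements, branches, etc."""
        return 100.0 * exercised / total if total else 100.0

    # Hypothetical measurements reported by a coverage tool:
    statements_hit, statements_total = 180, 200
    branches_hit, branches_total = 45, 60

    print(f"Statement coverage: {coverage_percent(statements_hit, statements_total):.1f}%")  # 90.0%
    print(f"Branch coverage:    {coverage_percent(branches_hit, branches_total):.1f}%")      # 75.0%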
3 
Test Levels 
Whichever development model is selected, a number of 
design activities are executed: 
Creation of components 
Integration of these components together 
When all components have been integrated, it is necessary to ensure that the system works properly end-to-end
Ensure that the system is accepted by the users, so that it can be delivered.
4 
V-model 
The V-model groups the testing tasks into four levels:
Component tests (Unit tests) 
Component integration tests
System tests, on the completely integrated system 
Acceptance tests, a prerequisite for delivery to the 
market or production
5 
Aspects of testing activities 
 Test object: the target of the test, be it a function, a sub-program, a program, a software application, or a system made up of different sub-systems
 Specific objectives: associated with that activity, the reasons why the tests will be executed. These can be to discover certain types of defects, to ensure correct operation, or to provide any other type of information (such as coverage)
 Test basis: a reference or set of information that can be used to define what the test object is supposed to do
 Entry and exit criteria: define when a task can start (the prerequisites) and when it can be considered finished (a sketch of these four aspects as a record follows)
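Every test level in the slides that follow is characterized by the same four aspects; a purely illustrative sketch (Python, hypothetical names) of that record:

    from dataclasses import dataclass

    @dataclass
    class TestLevel:
        """The four aspects that characterize each test level."""
        test_object: str            # what is being tested
        objectives: list[str]       # why the tests are executed
        test_basis: list[str]       # reference material defining expected behaviour
        entry_criteria: list[str]   # prerequisites before the level can start
        exit_criteria: list[str]    # conditions for considering the level finished

    component_test = TestLevel(
        test_object="individual component / module",
        objectives=["detect failures in the component"],
        test_basis=["component requirements", "detailed specification", "source code"],
        entry_criteria=["component compiled and executable", "specifications stable"],
        exit_criteria=["required coverage reached", "defects fixed and verified"],
    )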
6 
Component test (Unit Test) 
 Test object: components, program modules, functions, programs, database modules, SQL requests, depending on the granularity of the software or system to test
 Objective: detect failures in the components; verify the mode of operation of the component, module, program, object, class, etc., whether functional or non-functional (a minimal example follows)
 Reference materials: requirements applicable to the component, detailed specification, source code, algorithms
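A minimal sketch of such a component test, assuming Python and pytest (the slides do not prescribe a language or framework; all names are hypothetical):

    # discount.py -- the component (test object) under test
    def apply_discount(price: float, percent: float) -> float:
        """Return the price after applying a percentage discount."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    # test_discount.py -- component (unit) tests exercising it in isolation
    import pytest
    from discount import apply_discount

    def test_typical_discount():
        assert apply_discount(100.0, 20) == 80.0

    def test_zero_discount_keeps_price():
        assert apply_discount(59.99, 0) == 59.99

    def test_invalid_percentage_is_rejected():
        with pytest.raises(ValueError):
            apply_discount(100.0, 150)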
7 
Component test (Unit Test) 
 Entry criteria: the component is available, compiled, and can be executed in the test environment; the specifications are available and stable.
 Exit criteria: the required coverage level, functional and technical (or non-functional), has been reached; defects found have been corrected and the corrections have been verified; regression tests on the rest of the component have been executed on the latest version of the software.
8 
Integration level testing 
 Test object: components, infrastructure, interfaces, 
database systems, and file systems 
 Objective: detect failures in the interfaces and 
exchanges between components 
 Reference materials: preliminary and detailed design 
documentation for the software or system, software or 
system architecture, use cases, workflow, etc. 
 Entry criteria: components that must exchange data are available and have passed component testing successfully
 Exit criteria: all components have been integrated and 
all message types (sent or received) have been 
exchanged without any defect for each existing interface
9 
Integration level testing 
 Component integration testing focuses on the 
interfaces between components and between the 
different parts of the software and the system 
(including the hardware). 
 This includes the interface with the operating system, 
file systems, database systems, hardware, and 
software (protocol, message, etc.); interfaces inside 
the system or between systems
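As an illustrative sketch of a component integration test (Python, building on the hypothetical apply_discount component from the earlier sketch), the focus is on the data exchanged across the interface rather than on either component alone:

    from discount import apply_discount   # component from the earlier sketch

    def place_order(price: float, percent: float, ledger: list) -> float:
        """Order processing calls the discount component and records the result."""
        total = apply_discount(price, percent)
        ledger.append({"price": price, "discount": percent, "total": total})
        return total

    def test_order_and_discount_exchange_data_correctly():
        ledger = []                      # in-memory stand-in for the database interface
        total = place_order(100.0, 20, ledger)
        assert total == 80.0             # value returned across the interface
        assert ledger == [{"price": 100.0, "discount": 20, "total": 80.0}]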
10 
Ways to integrate components 
 Big bang integration 
 Bottom-up integration 
 Top-down integration 
 Other types of integration
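To make one of these strategies concrete: in top-down integration the higher-level component is integrated first, and components that are not yet available are replaced by stubs. A small sketch (Python, hypothetical names):

    class PaymentServiceStub:
        """Stands in for the real payment component until it is integrated."""
        def charge(self, amount: float) -> dict:
            return {"status": "approved", "amount": amount}   # canned answer

    def checkout(cart_total: float, payment_service) -> bool:
        """High-level component under test; it only sees the payment interface."""
        response = payment_service.charge(cart_total)
        return response["status"] == "approved"

    def test_checkout_against_payment_stub():
        assert checkout(42.0, PaymentServiceStub()) is True

Bottom-up integration reverses the roles: low-level components are integrated first, and temporary drivers play the part of the missing callers.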
11 
System tests 
 Test object: the complete software or system, its 
documentation (user manual, maintenance and 
installation documentation, etc.) 
 Objective: detect failures in the software, to ensure that 
it corresponds to the requirements and specifications, 
and that it can be accepted by the users 
 Entry criteria: all components have been correctly 
integrated, all components are available 
 Exit criteria: the functional and technical coverage level has been reached; must-fix defects have been corrected and their fixes have been verified; regression tests have been executed on the latest version of the software
12 
Acceptance tests 
 Test object: the complete software or system, its 
documentation, all necessary configuration items. 
 Objective: obtain customer or user acceptance of the 
software 
 Reference material: specifications and requirements for the system or software, use cases, risk analysis
 Entry criteria: all components have been correctly 
tested 
 Exit criteria: the expected coverage level has been reached; must-fix defects have been corrected and the fixes have been verified; regression tests have been executed on the latest version of the software
13 
Types of tests 
 You probably have a car, and you selected it based on criteria that were specific to you, such as price, color, brand, horsepower, reliability, comfort, miles per gallon, speed, price of spare parts or of maintenance, etc.
 Going from “A” to “B” was not the determining factor (all vehicles have that functionality)
 Your selection was thus based mostly on non-functional characteristics
14 
Functional tests 
 Functional tests focus on the functions of the 
software, what they do, the services provided, and 
the requirements covered 
 Functional tests include (ISO 9126): 
 Suitability 
 Accuracy 
 Interoperability 
 Security 
 Functional compliance
15 
Non-functional tests 
 Non-functional tests focus on the way the services 
are provided 
 Can be executed at all test levels (unit, integration, 
system, acceptance) 
 Non-functional aspects of the ISO 9126: 
 Reliability 
 Usability 
 Efficiency 
 Maintainability 
 Portability
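A minimal sketch of a non-functional check, here for efficiency (Python/pytest; the operation and the 0.5-second budget are assumptions for illustration):

    import time

    def load_catalog() -> list:
        """Hypothetical operation whose response time matters to users."""
        return sorted(range(100_000))

    def test_catalog_loads_within_time_budget():
        start = time.perf_counter()
        load_catalog()
        elapsed = time.perf_counter() - start
        assert elapsed < 0.5   # efficiency requirement assumed for this sketch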
16 
Tests associated with changes 
 When a defect has been corrected, two types of tests 
should be executed: 
1. Confirmation tests or retests: which focus on verifying 
that the defect has been corrected and the software 
operates as expected; and 
2. Regression tests: that will make sure that the correction 
did not introduce any side effects (regression) on the rest 
of the software 
 Regression tests and impact analysis of modifications are very important during software maintenance and correction (a sketch of both kinds of tests follows)
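A small sketch of the two kinds of change-related tests (again Python/pytest with the hypothetical apply_discount component from the earlier sketches): suppose a defect report said a 100% discount was wrongly rejected, and the component has since been fixed.

    from discount import apply_discount   # component from the earlier sketches

    def test_retest_full_discount_is_now_accepted():
        # Confirmation test (retest): re-runs the exact scenario from the defect report.
        assert apply_discount(100.0, 100) == 0.0

    def test_regression_existing_behaviour_unchanged():
        # Regression tests: previously passing cases must still pass after the fix.
        assert apply_discount(100.0, 20) == 80.0
        assert apply_discount(59.99, 0) == 59.99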
Impact analysis 
 We see the structure of calls between the components of a software system
 If component 22 must be updated, then components 1, 23, 24, 25, 26, and 27 will have to be verified
 Indirectly impacted 
components must be 
verified
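A small sketch of how this impact analysis can be automated (Python; the call graph below is a hypothetical reconstruction consistent with the slide's example): invert the "who calls whom" graph and walk it backwards from the changed component.

    from collections import deque

    # CALLS[x] = components that x calls (hypothetical graph matching the slide).
    CALLS = {
        1: [23, 25],
        23: [24],
        24: [22],
        25: [26],
        26: [27],
        27: [22],
    }

    def impacted_components(changed: int, calls: dict) -> set:
        """Return every component that directly or indirectly calls `changed`."""
        callers = {}                          # inverted graph: who calls whom
        for src, targets in calls.items():
            for dst in targets:
                callers.setdefault(dst, []).append(src)
        impacted, queue = set(), deque([changed])
        while queue:
            for caller in callers.get(queue.popleft(), []):
                if caller not in impacted:
                    impacted.add(caller)
                    queue.append(caller)
        return impacted

    # Updating component 22 means verifying components 1, 23, 24, 25, 26, and 27.
    assert impacted_components(22, CALLS) == {1, 23, 24, 25, 26, 27}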
18 
Test and maintenance 
 Software maintenance applies to modifications of 
software already used or delivered to the market 
 During maintenance, evolutions or fixes are usually caused by external events, such as changes in regulations that translate into a corresponding change in the software.
 Maintenance testing has constraints: 
 Timing constraints, for development and testing 
 Impact constraints, to ensure that other functionalities are 
not impacted 
 As time goes on, changes and fixes are added, the software evolves, and new functionalities are introduced.
Bathtub curve 
 The curve illustrates the evolution associated with the software life cycle
 “A”: during the design phase, the number of defects decreases until the software is delivered to the market
 “B”: maintenance costs become too large and justify retirement
