4. TOP-DOWN INTEGRATION
• The top-down strategy starts with the top, or initial,
module in the program [4].
• There is no single ‘‘right’’ procedure for selecting the next module to be
incrementally tested;
• The only rule is that, to be eligible to be the next module, at least one of the
module’s superordinate (calling) modules must have been tested previously.
• Top-down integration testing is an incremental approach
to construction of the software architecture [1].
• Modules are integrated by moving downward through the control
hierarchy, beginning with the main control module (main program).
• Modules subordinate to the main control module are incorporated into the
structure in either a depth-first or breadth-first manner.
5. SAMPLE
Look at the picture. How do we perform the test in a depth-first or
breadth-first manner?
6. DEPTH-FIRST INTEGRATION
• Integrates all components on a major control path of the
program structure.
• Selection of a major path is somewhat arbitrary and
depends on application-specific characteristics (e.g.,
components needed to implement one use case).
• For example, selecting the left-hand path:
• Components M1, M2, M5 would be integrated first.
• Next, M8 or (if necessary for proper functioning of M2) M6 would be
integrated.
• Then, the central and right-hand control paths are built.
7. BREADTH-FIRST INTEGRATION
• Incorporates all components directly subordinate at
each level, moving across the structure horizontally.
• Components M2, M3, and M4 would be integrated first.
• The next control level, M5, M6, and so on, follows.
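The two orders can be sketched over a hypothetical call hierarchy built from the component names above (the exact edges, e.g. M4 calling M7, are assumptions for illustration; the figure itself is not reproduced here):

```python
from collections import deque

# Assumed call hierarchy: M1 calls M2, M3, M4; M2 calls M5, M6; M5 calls M8.
HIERARCHY = {
    "M1": ["M2", "M3", "M4"],
    "M2": ["M5", "M6"],
    "M3": [],
    "M4": ["M7"],
    "M5": ["M8"],
    "M6": [], "M7": [], "M8": [],
}

def depth_first_order(root="M1"):
    """Integrate along one control path at a time (a preorder walk)."""
    order = []
    def visit(module):
        order.append(module)
        for child in HIERARCHY[module]:
            visit(child)
    visit(root)
    return order

def breadth_first_order(root="M1"):
    """Integrate all components directly subordinate at each level first."""
    order, queue = [], deque([root])
    while queue:
        module = queue.popleft()
        order.append(module)
        queue.extend(HIERARCHY[module])
    return order

print(depth_first_order())    # ['M1', 'M2', 'M5', 'M8', 'M6', 'M3', 'M4', 'M7']
print(breadth_first_order())  # ['M1', 'M2', 'M3', 'M4', 'M5', 'M6', 'M7', 'M8']
```

Note how depth-first reaches M8 before M3 or M4 are touched, while breadth-first finishes the M2, M3, M4 level before descending.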
8. THE INTEGRATION PROCESS
Performed in a series of five steps:
1. The main control module is used as a test driver, and stubs are substituted for all
components directly subordinate to the main control module.
2. Depending on the integration approach selected (i.e., depth or breadth first),
subordinate stubs are replaced one at a time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with the real
component.
5. Regression testing (discussed later in this section) may be conducted to ensure that
new errors have not been introduced.
The process continues from step 2 until the entire program structure is
built.
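The five steps can be sketched in miniature. The module names and behavior below are hypothetical, but the pattern is the one the steps describe: test the main module against a stub, then swap in the real component and re-test.

```python
def stub_format(data):
    """Stub: returns a canned value so the main module can be tested early."""
    return "<stubbed output>"

def real_format(data):
    """Actual subordinate component, integrated in a later step."""
    return ", ".join(str(x) for x in data)

def main_module(data, format_fn):
    """Main control module; its subordinate is injected so a stub can stand in."""
    return "report: " + format_fn(data)

# Step 1: exercise the main control module with the stub in place.
assert main_module([1, 2], stub_format) == "report: <stubbed output>"

# Steps 2-4: replace the stub with the real component and test again.
assert main_module([1, 2], real_format) == "report: 1, 2"
```

Passing the subordinate in as a parameter is only one way to make the swap easy; in practice the stub often simply shares the real component's name and interface.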
9. TOP-DOWN INTEGRATION (cont.)
• The top-down integration strategy verifies major control
or decision points early in the test process.
• In a “well-factored” program structure, decision making
occurs at upper levels in the hierarchy and is therefore
encountered first.
• If major control problems do exist, early recognition is
essential.
• If depth-first integration is selected, a complete function
of the software may be implemented and
demonstrated.
10. TOP-DOWN INTEGRATION (cont.)
• Early demonstration of functional
capability is a confidence builder for all
stakeholders.
• This approach looks uncomplicated, but in
fact, logistical problems will arise, because
the lower-level processes of the hierarchy
are required to test the higher-level ones.
• Stubs replace lower-level modules at the
start of top-down testing; hence no data
flows upward from the program structure.
11.
• The tester has only three options:
1. Delay most tests until the stubs are replaced
with the actual modules:
• Results in a loss of some control over the correspondence
between a particular test and a particular module.
• Makes it difficult to determine the cause of errors and tends
to violate the constraints of the top-down approach.
2. Develop stubs with limited functionality to
simulate the actual module; this may be feasible,
but it increases overhead as the stubs become
more complex.
3. Integrate the software from the bottom of the
hierarchy upward, known as bottom-up integration.
12. BOTTOM-UP INTEGRATION
• It begins construction and testing with
atomic modules (i.e., components at the
lowest levels in the program structure).
• Bottom-up integration eliminates the need
for complex stubs.
• Because components are integrated from the
bottom up, the functionality provided by
components subordinate to a given level is
always available and the need for stubs is
eliminated.
13. BOTTOM-UP INTEGRATION (cont.)
• Bottom-up testing (or bottom-up development)
is often mistakenly equated with non-
incremental testing.
• The reason is that bottom-up testing begins in a
manner that is identical to a non-incremental
test, but as we saw in the previous section,
bottom-up testing is an incremental strategy.
• The advantages of top-down testing become
the disadvantages of bottom-up testing, and
vice versa.
14. BOTTOM-UP INTEGRATION STRATEGY
• Begins with the terminal modules in the program (the modules that do not
call other modules).
• After these modules have been tested, there is no best procedure for selecting the
next module to be incrementally tested;
• The only rule is that to be eligible to be the next module, all of the module’s subordinate
modules (the modules it calls) must have been tested previously.
• A bottom-up integration strategy may be implemented with the following
steps:
1. Low-level components are combined into clusters (sometimes called builds) that perform
a specific software subfunction.
2. A driver (a control program for testing) is written to coordinate test-case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined, moving upward in the program structure.
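A minimal sketch of these four steps, with hypothetical atomic components: a throwaway driver exercises the cluster, then is removed when a higher-level module takes over.

```python
# Step 1: hypothetical atomic components combined into a cluster that
# performs one subfunction (parse a CSV string and total the values).
def parse(raw):
    return [int(x) for x in raw.split(",")]

def total(values):
    return sum(values)

def cluster_driver(raw):
    """Step 2: a throwaway driver coordinates test input and output."""
    return total(parse(raw))

# Step 3: the cluster is tested through its driver.
assert cluster_driver("1,2,3") == 6

# Step 4: the driver is removed; a higher-level module calls the
# already-tested cluster directly as integration moves upward.
def report(raw):
    return f"sum={total(parse(raw))}"

assert report("1,2,3") == "sum=6"
```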
16.
• Integration follows the pattern illustrated in the Picture.
• Components are combined to form clusters 1, 2, and 3.
• Each of the clusters is tested using a driver (shown as a dashed block).
• Components in clusters 1 and 2 are subordinate to Ma.
• Drivers D1 and D2 are removed, and the clusters are interfaced directly to
Ma.
• Similarly, driver D3 for cluster 3 is removed prior to integration with
module Mb.
• Both Ma and Mb will ultimately be integrated with component Mc, and so
forth.
• As integration moves upward, the need for separate test drivers
lessens.
• In fact, if the top two levels of program structure are integrated
top down, the number of drivers can be reduced substantially and
integration of clusters is greatly simplified.
17. COMPARISON: ADVANTAGES
TOP-DOWN INTEGRATION:
• Advantageous when major flaws occur toward the top of the program.
• Once the I/O functions are added, representation of test cases is easier.
• Early skeletal program allows demonstrations and boosts morale.
BOTTOM-UP INTEGRATION:
• Advantageous when major flaws occur toward the bottom of the program.
• Test conditions are easier to create.
• Observation of test results is easier.
18. COMPARISON: DISADVANTAGES
TOP-DOWN INTEGRATION:
1. Stub modules must be produced.
2. Stub modules are often more complicated than they first appear to be.
3. Before the I/O functions are added, the representation of test cases in stubs can be difficult.
4. Test conditions may be impossible, or very difficult, to create.
5. Observation of test output is more difficult.
6. Leads to the conclusion that design and testing can be overlapped.
7. Defers the completion of testing certain modules.
BOTTOM-UP INTEGRATION:
1. Driver modules must be produced.
2. The program as an entity does not exist until the last module is added.
20. DEFINITION
• Continuous integration is the practice of merging
components into the evolving software increment
once or more each day.
• This is a common practice for teams following agile
development practices such as XP or DevOps.
• Integration testing must take place quickly and
efficiently if a team is attempting to always have a
working program in place as part of continuous delivery.
21. SMOKE TESTING
• Smoke testing is an integration testing approach that can be used
when product software is developed by an agile team using short
increment build times.
• Smoke testing is an integration testing approach that is often used
when “small limited” software products are created.
• Designed as a pacing mechanism for time-critical projects,
it allows the software team to assess its project on a frequent basis.
• Smoke testing might be characterized as a rolling
or continuous integration strategy.
22. SMOKE TESTING ACTIVITIES
• In essence, the smoke-testing approach encompasses the following
activities:
1. Software components that have been translated into code are integrated into a build. A
build includes all data files, libraries, reusable modules, and engineered components that
are required to implement one or more product functions.
2. A series of tests is designed to expose errors that will keep the build from properly
performing its function. The intent should be to uncover “show-stopper” errors that have
the highest likelihood of throwing the software project behind schedule.
3. The build is integrated with other builds, and the entire product (in its current form) is
smoke tested daily. The integration approach may be top down or bottom up.
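One way to sketch activity 2, assuming each product function in the build exposes a quick end-to-end check (the function names and checks here are invented):

```python
def smoke_test(build):
    """Run one shallow end-to-end check per product function in the build.

    Returns the list of failing checks; an empty list means the build
    appears stable enough for more thorough testing.
    """
    failures = []
    for name, check in build.items():
        try:
            if not check():
                failures.append(name)
        except Exception:
            # A crash in a check is exactly the kind of show-stopper
            # the smoke test is meant to expose.
            failures.append(name)
    return failures

# Hypothetical daily build: each entry is a quick end-to-end check.
build = {
    "login":  lambda: True,
    "search": lambda: True,
    "report": lambda: 1 / 0,   # a show-stopper introduced today
}
assert smoke_test(build) == ["report"]
```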
23.
McConnell describes the smoke test in the
following manner:
• The smoke test should exercise the entire system
from end to end.
• It does not have to be exhaustive, but it should be
capable of exposing major problems.
• The smoke test should be thorough enough that if
the build passes, you can assume that it is stable
enough to be tested more thoroughly.
24. THE BENEFITS
Smoke testing provides a number of benefits when it
is applied on complex, time-critical software projects:
1. Integration risk is minimized.
• Because smoke tests are conducted daily, incompatibilities and
other showstopper errors are uncovered early, thereby reducing
the likelihood of serious schedule impact when errors are
uncovered.
2. The quality of the end product is improved
• Because the approach is construction (integration) oriented, smoke
testing is likely to uncover functional errors as well as architectural
and component-level design errors. If these errors are corrected
early, better product quality will result.
25. THE BENEFITS (cont.)
3. Error diagnosis and correction are simplified
• Errors uncovered during smoke testing are likely to be associated
with “new software increments”—that is, the software that has just
been added to the build(s) is a probable cause of a newly
discovered error.
4. Progress is easier to assess
• More of the software has been integrated and more has been
demonstrated to work. This improves team morale and gives
managers a good indication that progress is being made.
26. INTERFACE TESTING
• Performed when modules and subsystems
are integrated to form a larger system.
• Its purpose is to detect errors in the
interfaces, or invalid assumptions about
the interfaces, between components.
• It is especially important for systems
developed with an object-oriented
approach, where the system is defined by
the interfaces of its objects.
27. REGRESSION TESTING
• Each time a new module is added as part of
integration testing, the software changes.
• New data flow paths are established, new
input/output (I/O) may occur, and new control logic is
invoked.
• Side effects associated with these changes may cause
problems with functions that previously worked
flawlessly.
28. DEFINITION
• Regression testing is the re-execution of some subset
of tests that have already been conducted to ensure
that changes have not propagated unintended side
effects.
• Regression tests should be executed every time a
major change is made to the software (including the
integration of new components).
• Regression testing helps to ensure that changes (due
to testing or for other reasons) do not introduce
unintended behavior or additional errors.
29. PROCESS
• Regression testing may be conducted
manually, by re-executing a subset of all
test cases, or by using automated
capture/playback tools.
• Capture/playback tools enable the
software engineer to capture test cases
and results for subsequent playback and
comparison.
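A capture/playback cycle might be sketched like this (a bare-bones illustration; real tools also record UI events, timing, and screen output):

```python
def capture(fn, inputs):
    """First run: record each test input together with the observed output."""
    return [(x, fn(x)) for x in inputs]

def playback(fn, recording):
    """Later run: replay the inputs and report any input whose output changed."""
    return [x for x, expected in recording if fn(x) != expected]

# Hypothetical component under test and its recorded baseline.
square = lambda x: x * x
recording = capture(square, [1, 2, 3])

# The unchanged component replays cleanly; a regression is flagged.
buggy = lambda x: x * x if x < 3 else 0   # a defect slipped in for x >= 3
assert playback(square, recording) == []
assert playback(buggy, recording) == [3]
```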
30. TEST CASES
• The regression test suite (the subset of tests to be executed) contains
three different classes of test cases:
• A representative sample of tests that will exercise all software functions
• Additional tests that focus on software functions that are likely to be affected by the change
• Tests that focus on the software components that have been changed
• As integration testing proceeds, the number of regression tests can
grow quite large.
• Therefore, the regression test suite should be designed to include
only those tests that address one or more classes of errors in each of
the major program functions.
31.
• Regression tests should therefore be designed to
cover only one or a few classes of errors
in each of the major program functions.
• It is impractical and inefficient to re-execute every
test for every function of the program when a
change occurs.
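Selecting the three classes of regression tests from a tagged test pool might look like this (the test names, tags, and record fields are hypothetical):

```python
def regression_suite(tests, changed_components):
    """Pick the three classes of regression tests described above:
    a representative sample, tests for functions likely affected by
    the change, and tests that exercise the changed components."""
    suite = []
    for t in tests:
        if (t.get("representative")                                  # class 1
                or t.get("affected_by", set()) & changed_components  # class 2
                or t.get("component") in changed_components):        # class 3
            suite.append(t["name"])
    return suite

# Hypothetical test pool.
tests = [
    {"name": "t_core_paths", "representative": True},
    {"name": "t_billing", "component": "billing"},
    {"name": "t_reports", "affected_by": {"billing"}},
    {"name": "t_login", "component": "auth"},
]
assert regression_suite(tests, {"billing"}) == ["t_core_paths", "t_billing", "t_reports"]
```

Only the subset relevant to the change is re-executed, which keeps the suite from growing without bound as integration proceeds.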
32. CONCLUSION
• When integration testing is carried out, the tester must
identify critical modules. Characteristics of a critical module:
• It addresses several software requirements.
• It has a high level of control.
• It is highly complex (cyclomatic complexity can be used as an
indicator).
• It has definite performance requirements.
• Critical modules should be tested as early as possible.
• In addition, regression testing should focus
on critical module functions.
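The critical-module characteristics above can be sketched as a simple filter (the field names and the complexity threshold are illustrative choices, not from the slides):

```python
def critical_modules(modules, complexity_threshold=10):
    """Flag modules to integrate and regression-test first: those that
    address several requirements, sit high in the control hierarchy,
    exceed a cyclomatic-complexity threshold, or carry performance
    requirements."""
    def is_critical(m):
        return (len(m["requirements"]) > 1
                or m["control_level"] <= 1            # near the top of the hierarchy
                or m["cyclomatic"] > complexity_threshold
                or m["perf_requirement"])
    return [m["name"] for m in modules if is_critical(m)]

# Hypothetical module inventory.
modules = [
    {"name": "scheduler", "requirements": ["R1", "R4"], "control_level": 0,
     "cyclomatic": 14, "perf_requirement": True},
    {"name": "logger", "requirements": ["R9"], "control_level": 3,
     "cyclomatic": 4, "perf_requirement": False},
]
assert critical_modules(modules) == ["scheduler"]
```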
34. TESTING PURPOSE
• System testing is not a process of testing the functions of the
complete system or program, because this would be redundant with
the process of function testing.
• System testing has a particular purpose: to compare the system or
program to its original objectives.
• Given this purpose, consider these two implications:
1. System testing is not limited to systems. If the product is a program, system testing is the
process of attempting to demonstrate how the program, as a whole, fails to meet its
objectives.
2. System testing, by definition, is impossible if there is no set of written, measurable
objectives for the product.
35. CATEGORIES OF TEST CASES
Facility: Ensure that the functionality in the objectives is implemented.
Volume: Subject the program to abnormally large volumes of data to process.
Stress: Subject the program to abnormally large loads, generally concurrent processing.
Usability: Determine how well the end user can interact with the program.
Security: Try to subvert the program’s security measures.
Performance: Determine whether the program meets response and throughput requirements.
Storage: Ensure the program correctly manages its storage needs, both system and physical.
Configuration: Check that the program performs adequately on the recommended configurations.
36. CATEGORIES OF TEST CASES (cont.)
Compatibility/Conversion: Determine whether new versions of the program are compatible with previous releases.
Installation: Ensure the installation methods work on all supported platforms.
Reliability: Determine whether the program meets reliability specifications such as uptime and MTBF.
Recovery: Test whether the system’s recovery facilities work as designed.
Serviceability/Maintenance: Determine whether the application correctly provides mechanisms to yield data on events requiring technical support.
Documentation: Validate the accuracy of all user documentation.
Procedure: Determine the accuracy of special procedures required to use or maintain the program.
37. PERFORMING THE SYSTEM TEST
• One of the most vital considerations in implementing the system test
is determining who should do it.
• To answer this in a negative way:
1) Programmers should not perform the system test: a person performing a system test must
be capable of thinking like an end user, which implies a thorough understanding of the
attitudes and environment of the end user and of how the program will be used.
2) Of all the testing phases, this is the one that the organization responsible for developing
the programs definitely should not perform: a system test is an “anything goes, no holds
barred” activity. The system test should be performed by an independent group of people
with few, if any, ties to the development organization.
38. ACCEPTANCE TESTING
• Acceptance testing is the process of comparing the program to its
initial requirements and the current needs of its end users.
• It is an unusual type of test in that it usually is performed by the program’s customer or end
user and normally is not considered the responsibility of the development organization.
• The contracting (user) organization performs the acceptance test by
comparing the program’s operation to the original contract.
• The best way to do this is to devise test cases that attempt to show
that the program does not meet the contract;
• If these test cases are unsuccessful, the program is accepted.
39. INSTALLATION TESTING
• It is an unusual type of testing because its purpose is not to find
software errors but to find errors that occur during the installation
process.
• Many events occur when installing software systems, such as:
• User must select a variety of options.
• Files and libraries must be allocated and loaded.
• Valid hardware configurations must be present.
• Programs may need network connectivity to connect to other programs.
• The organization that produced the system should develop the
installation tests, which should be delivered as part of the system,
and run after the system is installed.
40. TEST COMPLETION CRITERIA
• One of the most difficult questions to answer
when testing a program is determining when to
stop, since there is no way of knowing whether the
error just detected is the last remaining error.
• The two most common criteria are these:
1. Stop when the scheduled time for testing expires.
2. Stop when all the test cases execute without
detecting errors; that is, stop when the test cases
are unsuccessful in finding errors.
41. CONTRADICTION
• The first criterion is useless because you can satisfy
it by doing absolutely nothing. It does not measure
the quality of the testing.
• The second criterion is equally useless because it
also is independent of the quality of the test cases.
• Furthermore, it is counterproductive because it
subconsciously encourages you to write test cases
that have a low probability of detecting errors.
42. BETTER TEST COMPLETION CRITERIA
• There are three categories of more useful criteria.
• The first category, but not the best, is to base completion on the use of
specific test-case design methodologies.
• The second category of criteria—perhaps the most valuable one—is to state
the completion requirements in positive terms.
• The third type of completion criterion, easy on the surface but
involving a lot of judgment and intuition, is to plot the number of
errors found per unit time during the test phase; the shape of the
curve can often tell you whether to continue the test phase or end
it and begin the next test phase.
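The third criterion can be sketched as a simple heuristic over the errors-found-per-week curve (the window size and thresholds are illustrative judgment calls, as the slide itself cautions):

```python
def defect_trend(errors_per_week):
    """Look at the shape of the errors-found-per-week curve.

    A sustained decline down to a low rate suggests the test phase can
    end; a flat or rising tail suggests testing should continue.
    """
    if len(errors_per_week) < 3:
        return "keep testing"          # not enough data to judge the curve
    recent = errors_per_week[-3:]
    declining = recent[0] > recent[1] > recent[2]
    low_rate = recent[-1] <= 2         # illustrative threshold
    return "consider stopping" if declining and low_rate else "keep testing"

assert defect_trend([12, 9, 15, 7, 4, 1]) == "consider stopping"
assert defect_trend([3, 8, 12, 14]) == "keep testing"
```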
43. References
1. Pressman, R. S., & Maxim, B. R. (2019). Software Engineering: A Practitioner’s Approach (9th ed.). McGraw-Hill Education.
2. Lewis, W. E. (2009). Software Testing and Continuous Quality Improvement (3rd ed.). Auerbach Publications.
3. Majchrzak, T. A. (2012). Improving Software Testing: Technical and Organizational Developments. Springer Science & Business Media.
4. Myers, G. J., Sandler, C., & Badgett, T. (2012). The Art of Software Testing (3rd ed.). John Wiley & Sons.
5. Bourque, P., Dupuis, R., Abran, A., Moore, J. W., & Tripp, L. (2014). Guide to the Software Engineering Body of Knowledge (SWEBOK). IEEE Computer Society.