Software Engineering Techniques
and Software Testing Strategy
Sir Junaid Arshad
University of Education
Software Engineering Techniques for SoS
This section outlines software engineering paradigms and techniques that could help tackle some
of the challenges associated with the development of systems of systems (SoS), and
which are therefore likely to be part of the SoS development frameworks of the
future. These are:
Service-oriented architectures (SOA):
SoS development involves the integration and secure interoperation of vastly
diverse technical systems [3, 5, 6, 10, 13, 18, 26]. Thanks to their platform
independence, loose coupling and support for security, SOA solutions 
represent strong candidates for implementing new computer systems or front-ends
to legacy systems that need to be integrated into an SoS.
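To make the SOA idea concrete, the sketch below wraps a (hypothetical) legacy component behind a platform-independent HTTP/JSON front-end, so that peer systems in the SoS depend only on the service contract and not on the legacy system's internals. The component, its data, and the endpoint are assumptions for illustration only.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical legacy component to be exposed to the SoS.
def legacy_inventory_lookup(item_id):
    stock = {"A1": 42, "B7": 3}          # stand-in for a real legacy data source
    return {"item": item_id, "stock": stock.get(item_id, 0)}

class FrontEnd(BaseHTTPRequestHandler):
    """Loosely coupled front-end: peers depend only on the HTTP/JSON
    contract, not on the legacy system's platform or implementation."""
    def do_GET(self):
        item_id = self.path.rstrip("/").split("/")[-1]
        body = json.dumps(legacy_inventory_lookup(item_id)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To serve (blocking call, shown here as a comment):
# HTTPServer(("localhost", 8080), FrontEnd).serve_forever()
```

A production SOA front-end would of course add security (authentication, transport encryption) and a formal service description; the point here is only the decoupling.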
Policy-based autonomic computing:
Ecosystems, cities and economies are often pointed out as examples of
eﬀective systems of systems. A common characteristic of all these systems of
systems is the way in which their global objectives are speciﬁed through high-level
incentives, rewards and penalties rather than by setting concrete, precise targets
[10, 24, 25]. Thus, the behavior of ecosystems is governed by laws of nature. The
development and everyday life of cities are subject to common or civil laws and
regulations. The evolution of economies is guided by taxation policies. If these
successful real-world examples are to be followed, techniques will be required that
can convey the global objectives of systems of systems as high-level policies to
their autonomous components. (Policy-based) autonomic computing addresses the
development of systems that can manage themselves based on a set of high-level
policies, and therefore represents an ideal paradigm for developing the
computer-system components of an SoS.
Formal verification:
A major concern of systems of systems is their ability to achieve an overall
objective in predictable and dependable ways, through the collaboration of
component systems with diﬀerent (and potentially conﬂicting) local goals [10, 24,
28]. Formal veriﬁcation, and in particular model checking and probabilistic
model checking, comprise a range of techniques that could be used or adapted
for use in the veriﬁcation of SoS policies, and ultimately for SoS dependability
management and assurance.
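The core of explicit-state model checking can be illustrated in a few lines: exhaustively explore every reachable state of a model and report a counterexample trace if any state violates a safety property. The toy model below (two components sharing a resource, with a policy forbidding simultaneous holding) is an assumption for illustration; real checkers such as those used for probabilistic model checking are vastly more sophisticated.

```python
from collections import deque

def check_safety(initial, successors, is_bad):
    """Minimal explicit-state model checker (illustrative): breadth-first
    exploration of the reachable state space, returning a counterexample
    path if some reachable state violates the safety property."""
    frontier = deque([(initial, [initial])])
    seen = {initial}
    while frontier:
        state, path = frontier.popleft()
        if is_bad(state):
            return path                      # counterexample trace
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None                              # property holds everywhere

# Toy SoS model: states are (holder_a, holder_b); each step toggles one
# holder; the safety policy forbids both components holding at once.
def successors(s):
    a, b = s
    return [(not a, b), (a, not b)]

trace = check_safety((False, False), successors, lambda s: s[0] and s[1])
```

Here the checker finds a three-state trace leading to the forbidden configuration, i.e. the policy as modelled is violated and would need a locking mechanism.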
Model-driven development and code generation:
The open, evolving nature of systems of systems allows their components to
join and leave dynamically [24, 28]. Having SoS components collaborate with peer
systems whose characteristics are often unknown until runtime is a major
challenge. A combination of model-driven development and runtime code
generation in which a dynamically acquired model of a peer system is used to
generate the necessary interfaces and logic for collaborating with this peer system
 represents a promising approach to addressing this challenge.
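One lightweight way to realize this in Python is to treat a dynamically acquired peer model as input to runtime class generation, producing a proxy whose methods did not exist at development time. The peer model, operation names, and transport below are all hypothetical placeholders for what discovery would deliver at runtime.

```python
# Sketch of runtime code generation from a dynamically acquired peer model.
# The model (name, operations) is a hypothetical stand-in for a service
# description obtained from the peer at discovery time.
peer_model = {
    "name": "SensorPeer",
    "operations": {"read_temperature": "temp", "read_humidity": "hum"},
}

def generate_proxy(model, transport):
    """Generate a proxy class whose methods are derived from the peer
    model, enabling collaboration with peers unknown until runtime."""
    def make_method(key):
        return lambda self: transport(key)
    methods = {op: make_method(key) for op, key in model["operations"].items()}
    return type(model["name"] + "Proxy", (object,), methods)

# A stand-in transport answering from a fixed table instead of the network.
fake_transport = {"temp": 21.5, "hum": 0.4}.get
Proxy = generate_proxy(peer_model, fake_transport)
peer = Proxy()
```

In a full implementation the transport closure would marshal the call over the network, and the generated methods would carry typed signatures taken from the peer's interface description.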
Component-based development:
SoS engineering requires the integration of existing and future commercial,
open source and proprietary systems, and component-based development
provides techniques that can help achieve this goal.
Resource discovery:
In the era of mobile computing, SoS components are expected to actively seek
partner systems and establish collaborations with them, thus joining (and
leaving) loosely coupled federations of systems on a regular basis [5, 10, 24].
The rich spectrum of resource discovery techniques employed by today's
distributed (e.g., grid- and web-based) computer systems can be used as a basis
for the development of techniques to support these capabilities.
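A minimal registry-based discovery sketch, under the assumption of a central registry (real SoS discovery may equally be decentralized or gossip-based; all identifiers here are invented):

```python
# Minimal registry-based resource discovery sketch (illustrative names).
# Components advertise capabilities; a joining component queries for partners.
class Registry:
    def __init__(self):
        self._entries = {}                 # component id -> set of capabilities

    def advertise(self, component_id, capabilities):
        self._entries[component_id] = set(capabilities)

    def withdraw(self, component_id):      # components may leave at any time
        self._entries.pop(component_id, None)

    def discover(self, required):
        """Return the ids of components offering every required capability."""
        need = set(required)
        return sorted(cid for cid, caps in self._entries.items()
                      if need <= caps)

reg = Registry()
reg.advertise("storage-1", ["store", "replicate"])
reg.advertise("compute-7", ["map", "reduce"])
partners = reg.discover(["store"])
```

The advertise/withdraw pair directly models the "joining and leaving loosely coupled federations" behavior described above.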
A Framework for Systems of Systems
Our approach to integrating the software engineering techniques analysed in the
previous section involves the use of a reconﬁgurable policy engine with the
structure in Figure 1.
The SOA implementation of this policy engine as a web service takes a model of
a system and a set of policies, and ensures that the system achieves the
objectives speciﬁed by these policies by adapting continually to changes in its
environment. The system model has the form

model = (S, C, f),    (1)

where S and C represent the state and conﬁguration parameters of the modelled
system, and the partial function

f : S × C ↦ S

is a (possibly incomplete) speciﬁcation of the system behavior. Thus, for any
current state s ∈ S and for any conﬁguration c ∈ C such that (s, c) ∈ dom f,
f(s, c) represents the future state of the system. When a running instance of the
policy engine is dynamically reconﬁgured by means of a model (1), its runtime
code generator employs model-driven development techniques to produce
manageability adaptor proxies. The monitor and control interfaces of these
proxies allow the engine to read the state and to modify the conﬁguration of the
system components.
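The model of equation (1) can be sketched directly: represent the partial function f as a dictionary (pairs outside dom f are simply absent), and let one engine step pick a configuration whose predicted next state satisfies a policy. The states, configurations, and transitions below are invented for illustration.

```python
# Sketch of the system model of Eq. (1): model = (S, C, f), with f a
# partial function S x C -> S given as a dict (pairs outside dom f absent).
S = {"idle", "busy", "overloaded"}
C = {"one_worker", "two_workers"}
f = {
    ("idle", "one_worker"):        "busy",
    ("busy", "one_worker"):        "overloaded",
    ("busy", "two_workers"):       "idle",
    ("overloaded", "two_workers"): "busy",
}

def choose_config(state, policy_ok):
    """One engine step: pick a configuration whose predicted future state
    f(s, c) satisfies the policy, considering only pairs inside dom f."""
    for c in sorted(C):
        nxt = f.get((state, c))            # None when (state, c) is not in dom f
        if nxt is not None and policy_ok(nxt):
            return c, nxt
    return None
```

Because f may be incomplete, the engine must tolerate (state, configuration) pairs with no known outcome, which is why the lookup is partial rather than total.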
Software Testing Strategy
Strategic approach to software testing
Testing is a set of activities that can be planned in advance and conducted
systematically. For this reason a template for software testing -- a set of steps
into which we can place specific test case design techniques and testing
methods -- should be defined for the software process.
A number of software testing strategies have been proposed in the literature. All
provide the software developer with a template for testing and all have the
following generic characteristics.
Testing begins at the component level and works 'outward' toward the
integration of the entire computer-based system.
Different testing techniques are appropriate at different points in time.
Testing is conducted by the developer of the software and (for large
projects) an independent test group.
Organizing for Software Testing
For every software project, there is an inherent conflict of interest that
occurs as testing begins. The people who have built the software are now asked to
test the software. This seems harmless in itself; after all, who knows the program
better than its developers? Unfortunately, these same developers have a vested
interest in demonstrating that the program is error free, that it works according to
customer requirements, and that it will be completed on schedule and within
budget. Each of these interests militates against thorough testing.
There are often a number of misconceptions that can be erroneously inferred from
the preceding discussion:
That the developer of software should do no testing at all.
That the software should be 'tossed over the wall' to strangers who will test it
mercilessly.
That testers get involved with the project only when the testing steps are
about to begin.
A Software Testing Strategy
The software engineering process may be viewed as the spiral illustrated in
the figure below. Initially, system engineering defines the role of software and
leads to software requirements analysis, where the information domain, function,
behavior, performance, constraints, and validation criteria for software are
established.
A strategy for software testing may also be viewed in the context of the spiral
(figure above). Unit testing begins at the vortex of the spiral and concentrates
on each unit (i.e., component) of the software as implemented in source code.
Testing progresses by moving outward along the spiral to integration testing,
where the focus is on design and the construction of the software architecture.
Taking another turn outward on the spiral, we encounter validation testing,
where requirements established as part of software requirements analysis are
validated against the software that has been constructed. Finally, we arrive at
system testing, where the software and other system elements are tested as a
whole. To test computer software, we spiral out along streamlines that broaden
the scope of testing with each turn.
Unit testing focuses verification effort on the smallest unit of software
design -- the software component or module. Using the component-level design
description as a guide, important control paths are tested to uncover errors within
the boundary of the module. The relative complexity of tests and uncovered errors
is limited by the constrained scope established for unit testing. The unit test is
white-box oriented, and the step can be conducted in parallel for multiple
components.
Unit Test Considerations
The tests that occur as part of unit tests are illustrated schematically in
Figure below. The module interface is tested to ensure that information properly
flows into and out of the program unit under test. The local data structure is
examined to ensure that data stored temporarily maintains its integrity during all
steps in an algorithm's execution. Boundary conditions are tested to ensure that the
module operates properly at boundaries established to limit or restrict processing.
All independent paths (basis paths) through the control
structure are exercised to ensure that all statements in a
module have been executed at least once. And finally,
all error handling paths are tested.
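The boundary-condition and error-path ideas above can be sketched with a small unit test. The module under test (a hypothetical clamp function, invented for illustration) is exercised inside its range, exactly on both boundaries, just outside them, and along its error-handling path.

```python
import unittest

# Hypothetical module under test: clamps a reading into an allowed range.
def clamp(value, low, high):
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))

class ClampBoundaryTests(unittest.TestCase):
    """Unit tests aimed at the boundaries that limit or restrict processing."""
    def test_inside_range(self):
        self.assertEqual(clamp(5, 0, 10), 5)
    def test_at_lower_and_upper_bounds(self):
        self.assertEqual(clamp(0, 0, 10), 0)    # exactly on each boundary
        self.assertEqual(clamp(10, 0, 10), 10)
    def test_just_outside_bounds(self):
        self.assertEqual(clamp(-1, 0, 10), 0)
        self.assertEqual(clamp(11, 0, 10), 10)
    def test_error_path(self):                   # error handling exercised too
        with self.assertRaises(ValueError):
            clamp(5, 10, 0)
```

Most boundary defects live at or just beside the limits, which is why the test cases cluster there rather than sampling the interior of the range.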
Among the more common errors in computation are:
misunderstood or incorrect arithmetic precedence,
mixed-mode operations,
incorrect symbolic representation of an expression.
Test cases should uncover errors such as:
comparison of different data types,
incorrect logical operators or precedence,
expectation of equality when precision error makes equality unlikely,
incorrect comparison of variables,
improper or nonexistent loop termination,
failure to exit when divergent iteration is encountered, and
improperly modified loop variables.
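The precision-equality item from the list above is worth a concrete demonstration: summing ten floating-point 0.1 values does not produce exactly 1.0, so a test case comparing with == silently embodies the very error it should uncover.

```python
import math

# "Expectation of equality when precision error makes equality unlikely":
# accumulated floating-point error defeats an exact comparison, and a
# test case should use a tolerant comparison instead.
total = sum([0.1] * 10)                          # not exactly 1.0

naive_equal = (total == 1.0)                     # the buggy comparison
robust_equal = math.isclose(total, 1.0, rel_tol=1e-9)
```

A good unit test therefore compares floating-point results with an explicit tolerance, never with exact equality.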
Unit Test Procedures
Unit testing is normally considered as an adjunct to the coding step. After
source level code has been developed, reviewed, and verified for correspondence
to component-level design, unit test case design begins. A review of design
information provides guidance for establishing test cases that are likely to uncover
errors in each of the categories discussed earlier. Each test case should be coupled
with a set of expected results.
Because a component is not a stand-alone program, driver and/or stub software
must be developed for each unit test. The unit test environment is illustrated in
figure below. In most applications a driver is nothing more than a 'main
program' that accepts test case data, passes such data to the component to be
tested, and prints relevant results. Stubs serve to replace modules that are
subordinate to (called by) the component to be tested. A stub or 'dummy
subprogram' uses the subordinate module's interface, may do minimal data
manipulation, prints verification of entry, and returns control to the module
undergoing testing.
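The driver/stub arrangement can be sketched as follows. The component, its subordinate module, and all data values are invented for illustration: the stub does minimal data handling and records that it was entered, while the driver feeds test case data to the component and collects results.

```python
# Sketch of the unit-test environment: a driver feeds test data to the
# component under test; a stub stands in for a subordinate module.
def lookup_rate_stub(region):
    """Stub for the subordinate rate-lookup module: minimal data handling,
    plus a record of entry, then control returns to the caller."""
    lookup_rate_stub.calls.append(region)        # verification of entry
    return {"EU": 0.20, "US": 0.07}.get(region, 0.0)
lookup_rate_stub.calls = []

def compute_price(amount, region, lookup_rate=lookup_rate_stub):
    """Component under test; its real subordinate is replaced by the stub."""
    return round(amount * (1 + lookup_rate(region)), 2)

def driver():
    """'Main program' driver: accepts test case data, passes it to the
    component under test, and reports (actual, expected) pairs."""
    cases = [(100.0, "EU", 120.0), (100.0, "US", 107.0)]
    return [(compute_price(a, r), expected) for a, r, expected in cases]
```

Injecting the subordinate as a parameter keeps the swap from stub to real module a one-line change once the real module exists.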
A neophyte in the software world might
ask a seemingly legitimate question once all
modules have been unit tested: "If they all work
individually, why do you doubt that they'll work
when we put them together?" The problem, of course, is putting them together
-- interfacing.
Data can be lost across an interface; one module can have an inadvertent, adverse
effect on another; sub-functions, when combined, may not produce the desired
major function; individually acceptable imprecision may be magnified to
unacceptable levels; global data structures can present problems. Sadly, the list
goes on and on.
Top down Integration
Top-down integration testing is an incremental approach to construction of the
program structure. Modules are integrated by moving downward through the
control hierarchy beginning with the main control module (main program).
Modules subordinate (and ultimately subordinate) to the main control module are
incorporated into the structure in either a depth-first or breadth-first manner.
The integration process is performed in a series of five steps:
1. The main control module is used as a test driver and stubs are substituted for
all components directly sub-ordinate to the main control module.
2. Depending on the integration approach selected ( i.e., depth or breadth first),
sub-ordinate stubs are replaced one at a time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with the real
component.
5. Regression testing may be conducted to ensure that new errors have not been
introduced.
The top-down strategy sounds relatively uncomplicated, but in practice,
logistical problems can arise. The most common of these problems occurs when
processing at low levels in the hierarchy is required to adequately test upper
levels. Stubs replace low-level modules at the beginning of top-down testing;
therefore, no significant data can flow upward in the program structure. The
tester is left with three choices:
Delay many tests until stubs are replaced with actual modules.
Develop stubs that perform limited functions that simulate the actual module.
Integrate the software from the bottom of the hierarchy upward.
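The five-step process above can be sketched in miniature. The main control module is exercised first with a stub for its subordinate; the stub is then replaced by the real component and the test rerun, mimicking one stub-to-component swap of the top-down sequence. All modules and data here are invented for illustration.

```python
# Sketch of top-down integration: the main control module is tested with a
# stub for its subordinate, which is then replaced by the real component
# and the same test repeated (one swap of steps 2-5 above).
def stub_parse(text):
    """Stub subordinate: fixed answer, just enough to drive the top level."""
    return ["stubbed"]

def real_parse(text):
    """Real subordinate component that later replaces the stub."""
    return text.split()

def main_control(text, parse):
    """Main control module; its subordinate is injected so it can be
    exercised with either the stub or the real component."""
    return len(parse(text))

def integrate_top_down(test_input):
    results = []
    for subordinate in (stub_parse, real_parse):   # stub first, then real
        results.append(main_control(test_input, subordinate))
    return results
```

Running the identical test after each replacement is what lets regression problems surface immediately, at the swap that introduced them.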