Quality Attribute: Testability
SSW-565A
_________________________________________________________________________
Final Paper
Pranay Singh
Pledge: I pledge my honor that I have abided by the Stevens Honor System.
Executive Summary
This paper elaborates on architectural tactics and their applicability to achieving the quality of testability in a software system. It also covers the associated quantitative measurements (metrics) that can be applied to assess the quality of a design for testability at both the architectural and component levels.
Brief Overview of the software system on which I aim to achieve testability:
The primary objective of the ration shop web app is to serve the rural population.
Core features of the application:
● Users can log in or sign up as a customer or as a shopkeeper.
● My application interacts with the Google Maps API to list the nearby ration shops.
● The shopkeeper can update the pricing and availability of grocery items daily.
Table of Contents
1. Executive Summary
2. Table of Contents
3. Introduction to the Quality Attribute of Testability
4. Analysis of Tactics
5. Applicability of Tactics
6. Conclusion and References
Introduction: Testability
According to ISO standard 25010:2011, Software Testability can be defined as –
“Degree of effectiveness and efficiency with which test criteria can be established for
a system, product or component and tests can be performed to determine whether
those criteria have been met.”
According to Beizer(1983), “The act of designing tests is one of the most effective
error prevention mechanisms known. The thought process that must take place to
create useful tests can discover and eliminate problems at every stage of
development”[1]. This description of testability conveys a great deal about how we achieve the quality of testability in modern-day software systems.
Specifically, knowing that there are faults in our software but being unable to demonstrate those faults through testing calls for different architectural tactics to achieve testability in our software system. We obviously want to deliver software to the customer with minimal bugs, so we need to find faults in the testing phase and address them as quickly as possible. Whenever an increment is made to a particular piece of software, utilizing some of these architectural tactics eases the process of finding faults and bugs. Another factor integral to achieving testability through architectural tactics is cost: a large part of any project budget is allocated to testing, so software architects should be able to minimize the high costs associated with testing with the help of these tactics.
To achieve the quality of testability, we will now analyze major architectural and design decisions that help us ensure the reliability of our software system and keep our budget in check. We will also discuss how applying these tactics through empirical means produces executable tests whose results testers can interpret and act on.
Analysis of Architectural Tactics
Kazman(2013)[2] describes a general scenario to achieve testability that consists of Source, Stimulus, Artifacts, Environment, Response, and Response Measure. In other words, the source of the stimulus ranges from unit testers to acceptance testers. They provide a set of tests, known as the stimulus, to the functionality being tested, which is defined as the artifact. The environment in which the test happens can be deployment time or compile time. The results are captured by controlling the system to perform those tests. Finally, the response measure depicts the ease with which system testing discovered faults in that particular functionality.
As per the textbook, tactics to achieve testability can be classified into two
categories:
(Figure: general scenario for testability. Source: Kazman(2013)[2])
● First, tactics that add controllability and observability to a software system.
○ Well-defined interfaces: This architectural tactic is widely used by software systems based on an object-oriented approach, where we use setter and getter methods to capture the state of the components/classes involved and revert an object to its original state using a reset method.
○ Record/playback: In this tactic, we record state as it passes through an interface so that the captured information can be replayed as input to reproduce a fault during further tests.
○ Abstract data sources: In this tactic, to make testing easier we control the input data instead of the internal state. If the system already has data inputs of a particular type, the architecture can be designed so that our test system points to separate test databases.
○ Sandbox: Isolating external dependencies is integral to designing and writing tests; therefore, an instance/resource whose behavior is outside the control of the system can be tested separately without concern for returning the instance to its normal state.
○ Executable assertions: Assertions are hand-coded and placed in a program to indicate where a bug occurred during testing. Payne(1997)[5] emphasizes the use of software contracts when designing software to achieve testability. In software contracts, the behavioral semantics of a class and its methods are specified in terms of an invariant, a precondition, and a postcondition. Suppose we call a method S on a class object O; then according to the contract, the precondition states what must hold before the client calls S, the postcondition specifies how S should behave when the precondition is satisfied, and the invariant constrains the state of O within S. These mechanisms also support the controllability and observability tactics mentioned in Beizer(1983)[1] and thus help achieve testability, because S itself can be the only transformation of a given input into a specific output. Assertions derived from these contracts can be injected into the source code while implementing the class.
● Second, tactics that limit the complexity of the software system's design.
○ Limit structural complexity: In this tactic, we isolate and encapsulate external dependencies between program modules. For instance, while developing a software system through an object-oriented approach, we need to avoid cyclic dependencies between components. A microservices architecture achieves a high level of testability because microservices are separately deployed, single-purpose services; changes are generally isolated to one or two microservices, and structural changes to data are self-contained within a bounded context, influencing only that particular service.
○ Limit nondeterminism: The internal features of object-oriented programming that impact testability and are linked with the behavioral complexity of the system are:
■ Inheritance: we need to inhibit dependencies between modules/classes to achieve testability.
■ Polymorphism: it multiplies the behaviors to be covered and therefore the test cases we need to generate.
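The executable-assertions tactic above (a precondition, postcondition, and invariant for a method S on an object O) can be sketched with plain `assert` statements. This is a minimal illustration in the spirit of Payne(1997)[5]; `StockLedger` and `withdraw` are invented names, not code from any real system.

```python
# Design-by-contract sketch: the invariant is checked on entry and exit,
# the precondition before the method body, the postcondition after it.
class StockLedger:
    def __init__(self):
        self.stock = {}  # item name -> non-negative quantity

    def _invariant(self):
        assert all(q >= 0 for q in self.stock.values()), "invariant: negative stock"

    def withdraw(self, item, qty):
        self._invariant()
        # Precondition: the request must be satisfiable.
        assert qty > 0, "precondition: qty must be positive"
        assert self.stock.get(item, 0) >= qty, "precondition: not enough stock"
        before = self.stock[item]
        self.stock[item] -= qty
        # Postcondition: exactly qty units were removed.
        assert self.stock[item] == before - qty, "postcondition violated"
        self._invariant()
        return self.stock[item]
```

A failing assertion points directly at the method where the infection occurred, which is exactly the controllability and observability benefit the tactic promises.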
Most big and complex software systems follow the object-oriented approach, so DFT (Design for Testability) becomes an integral part of such systems. As per Joshi(2014)[6], DFT can be classified into two categories:
1. Design Time: During the analysis and design phase of an object-oriented software system, testability analysis yields the most benefit, because we examine the system's architecture through various diagrams, such as the UML deployment diagram (process view) and the UML class diagram (logical view); on the basis of this feedback we can improve the system design and architecture before it enters the implementation phase. Design Time DFT ensures that the evaluation of testability in later phases becomes more reliable.
Ma(2017)[4] describes how code coverage and the fault detection rate are used as metrics to measure testability in object-oriented software development, and how code visibility affects them. When we run a given set of tests, the portion of the program that is executed is known as code coverage; in other words, it is the ratio of executed statements to the total number of statements in the tested program. Visibility refers to the information available to program elements from outside the current scope, and the fault detection rate refers to the number of faults detected after running a test suite. The higher the code coverage, the greater the testability of the program. Empirical study has also shown that, irrespective of visibility, developer-written tests achieve the same code coverage, whereas with automated testing tools low code visibility leads to low code coverage. Therefore, high traceability becomes a key factor in achieving testability, because it is necessary to test less visible code.
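The coverage ratio just described can be sketched with only the standard library. This is an approximation for illustration, not a production coverage tool: `measure_coverage` counts the bytecode line starts of a single function as its "statements", and `classify` is an invented sample function.

```python
import dis
import sys

def measure_coverage(func, *calls):
    """Rough statement coverage of func: executed lines / code lines,
    where each element of `calls` is a tuple of arguments to run."""
    code_lines = {ln for _, ln in dis.findlinestarts(func.__code__) if ln is not None}
    executed = set()

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            executed.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        for args in calls:
            func(*args)
    finally:
        sys.settrace(None)
    return len(executed & code_lines) / len(code_lines)

def classify(score):
    if score >= 50:
        return "pass"
    return "fail"
```

Running only `classify(80)` exercises one branch, so the ratio stays below the ratio obtained when a second test also covers the failing branch, which mirrors "the higher the code coverage, the greater the testability".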
Freeman(2010)[3] describes how test-driven development (TDD) can be used to achieve testability in the object-oriented realm. This tactic is based on a simple approach in which we write the tests for our code before the actual code. First, we build an infrastructure by building, deploying, and testing a walking skeleton, and then write acceptance tests for the first feature. We then add end-to-end tests to expose uncertainty sooner, because they cover the area of code where change is needed. The tests written to describe a feature keep failing until the feature has been implemented properly; if a test fails again after passing, it implies that our existing code is broken. To achieve DFT using TDD, software systems require a modular architecture with well-defined components. For effective test-driven development, modularity clusters the components that share similar features. To achieve a modular architecture, the principle of high cohesion and loose coupling is followed, which ensures that tests related to components sharing similar traits are easier to maintain in a unit and that these units are adequately tested in isolation. This is also known as isolatability, one of the factors that determine the testability of software components. In practice we create packages, and the classes in a package are supposed to perform certain responsibilities; these classes work strongly together without needing anything from the outside.
(Figure: high cohesion and loose coupling)
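The red/green cycle of TDD described above can be sketched with Python's `unittest`. `PriceList` and its methods are hypothetical names echoing the shopkeeper price-update feature of the ration shop app, not code from the actual application.

```python
import unittest

# Red: this test is written first; it fails until PriceList is implemented.
class TestPriceUpdate(unittest.TestCase):
    def test_shopkeeper_can_update_price(self):
        prices = PriceList()
        prices.update("rice", 2.50)
        self.assertEqual(prices.price_of("rice"), 2.50)

# Green: the minimal implementation that makes the test pass.
class PriceList:
    def __init__(self):
        self._prices = {}

    def update(self, item, price):
        self._prices[item] = price

    def price_of(self, item):
        return self._prices[item]
```

If a later change breaks `update`, this previously passing test fails again, signaling that existing code is broken, exactly the feedback loop the tactic relies on.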
2. Code Time: Design Time DFT will not give effective results for a highly dynamic object-oriented software system, so Code Time DFT is a design technique used during the coding phase. In this tactic, certain mechanisms are deployed at coding time to reduce the chances of system failure and thus improve testability.
Writing AUTs (automated unit tests) is also one way to achieve DFT, but the obstacle in this approach is that we need to segregate the tested parts of the system from the untested parts, which becomes a convoluted process when we deal with highly dynamic and complex software systems. Generally, when we develop software, we create different classes that together serve the purpose of the whole system, whereas tests are written for only a specific class to make sure it works properly. Since classes in the object-oriented realm are collaborative, this process becomes expensive and intricate, so we need certain mechanisms to facilitate AUTs. In other words, we need to maximize the separation of concerns, one of the key factors that impact the testability of software components. The classes should be isolated and surrounded by test classes, which allows verification of all the interactions carried out by the classes under test. This tactic is also known as BIST (Built-In Self-Test). BIST provides better controllability and observability by increasing test capability and reducing the complexity of the software system. It also limits the dependence on an external testing entity, eventually cutting down the costs associated with testing.
Applicability of Tactics
This section elaborates on the implicit and explicit application of the above-listed tactics:
● Abstract Data Sources: Since my ration shop web application's database can be viewed as an external dependency that might complicate a test scenario, local test files with prefilled data (such as JSON or CSV) or an in-memory database such as SQLite could have been used instead of the real database in order to control and isolate data-source dependencies. Pillai(2017)[7]
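A minimal sketch of this tactic using Python's built-in `sqlite3`: an in-memory database stands in for the real ration shop database so tests fully control their own data. The `items` table, its columns, and the helper names are invented for the example, not taken from the actual application.

```python
import sqlite3

def make_test_db():
    """Build an in-memory SQLite database prefilled with known test data."""
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE items (name TEXT PRIMARY KEY, price REAL, available INTEGER)"
    )
    conn.executemany(
        "INSERT INTO items VALUES (?, ?, ?)",
        [("rice", 2.50, 1), ("wheat", 1.75, 0)],
    )
    conn.commit()
    return conn

def available_items(conn):
    """Code under test: list item names currently marked available."""
    rows = conn.execute("SELECT name FROM items WHERE available = 1")
    return [row[0] for row in rows]
```

Because the test owns the data source, assertions can be exact, and the real database is never touched.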
● Sandboxing: This tactic could have been applied using the following
technique Pillai(2017)[7]:
○ Resource Virtualization: We virtualize the resources that are outside
the system to control their behavior, that is, architecting a version that
mirrors their API, but not the internal implementation. Mocking is one
such technique to achieve resource virtualization:
■ Mocks: Mock objects are created to capture and observe the
state of the object while testing. This technique involves
replication of objects to test the behavior of other real objects
and since both use the same interface, the client object is
unaware of whether it is real or a mock. This is useful when a real object is impractical to incorporate into a unit test.
Freeman(2010)[3]
(Figure: a mocking framework)
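A hedged sketch of resource virtualization via mocking with Python's `unittest.mock`: `fetch_nearby_shops`, the `maps_client` parameter, and the shape of the `places_nearby` call are invented stand-ins for the app's Google Maps integration, not the real Google Maps client API.

```python
from unittest import mock

def fetch_nearby_shops(maps_client, location):
    """Code under test: list shop names near a location via an injected client."""
    response = maps_client.places_nearby(location=location, keyword="ration shop")
    return [place["name"] for place in response["results"]]

# The mock mirrors the client's interface without its implementation,
# so the code under test cannot tell it apart from the real object.
fake_maps = mock.Mock()
fake_maps.places_nearby.return_value = {
    "results": [{"name": "Shop A"}, {"name": "Shop B"}]
}

shops = fetch_nearby_shops(fake_maps, (40.74, -74.03))
# The mock also records the interaction, giving observability over the call.
fake_maps.places_nearby.assert_called_once_with(
    location=(40.74, -74.03), keyword="ration shop"
)
```

Injecting the client as a parameter is what makes the substitution possible; a hard-coded global client would defeat the sandbox.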
● Limit Non-determinism:
○ To increase the overall testability of the system, I could have made a testability assessment model to track the object-oriented programming features that affect testability at the internal level, using the following metrics:
■ NOC (Number of Children): count the number of children of the superclass to check for dependencies.
■ NMO (Number of Methods Overridden): count the number of overridden methods.
● Inserting Assertions: With the help of the design-by-contract technique, I could have integrated assertions into my web application's program code, and having such explicit checks in the code should greatly increase the chances of quickly finding and fixing a bug. An infection in the program state of my web application will propagate into a failure; therefore, if I inject assertions into my source code that check a particular area every time a method is invoked, I wouldn't see the failure only at the last moment but as soon as the infection fails an assertion.
● Isolatability: Yes, I could have applied this tactic implicitly.
○ JavaServer Pages (JSP) powered the backend logic of my web application, so packages incorporated classes with similar traits.
○ The principles of high cohesion and loose coupling can be assessed using testability evaluation metrics:
■ CBO (Coupling Between Objects): count the number of classes directly coupled to a given class.
■ LCOM (Lack of Cohesion Of Methods): evaluate the interaction between a class's methods and its attributes.
○ These principles directly influence external software quality factors such as controllability, complexity, understandability, and traceability, which could have helped in achieving testability in my ration shop web application. Suri(2015)[8]
● Specialized interfaces: In order to achieve DFT in my web application, I could have used the technique of scenario modelling to build a modular architecture. A modular system would have allowed me to test each component/module independently of the rest of the system by plugging it into a test harness. Scenario models are sets of sequence charts that demonstrate the interaction between components in response to a specific stimulus.
● Limit Structural Complexity: I could have used the Response For a Class (RFC) metric to reduce class complexity. The RFC metric would have allowed me to keep track of the methods of a particular class A, plus the methods on other classes called by the methods of class A. Pillai(2017)[7]
● Traceability: Yes, I implicitly achieved traceability by using the following access-level modifiers, which have a major impact on code visibility.
○ Public: methods, classes, and variables declared public can be accessed from anywhere.
○ Private: methods declared private are only accessible within the class.
○ Protected: members declared protected can be accessed by classes in the same package and by subclasses in other packages.
○ In order to achieve testability by testing less visible code, automated tools like EvoSuite could have been used to measure code coverage. Ma(2017)[4]
(Figure: code visibility control based on access modifiers)
The tactics that are likely not applicable in my web application are:
● Built-In Testing: First, this tactic is widely used to achieve testability in digital circuit systems, and second, it requires high coupling to increase the amount of built-in tests in a software system. Since my web application is based on the principle of loose coupling, it becomes difficult to use built-in tests in my ration shop web application.
● Record/Playback: My web application wasn't intricate enough to justify this tactic, in which I would have to equip the system with some kind of execution or data trail that can record what is going on within a particular module and be switched on and off.
Conclusion
To conclude, testability is the ability of a software system to control and verify things.
We deduced that qualities that make a product such as my ration shop web
application more testable are desirable for customers too. We can achieve the
quality of testability in a software system with external factors such as controllability,
good observability, adequate complexity, high traceability, low coupling, high
cohesion. Testability doesn’t simply depend on more tests and better frameworks,
rather it relies upon the strategies we use at the architectural level to make testing
easier.
References
[1] B. Beizer, Software Testing Techniques. New York: Van Nostrand Reinhold, 1983.
[2] Len Bass, Paul Clements and Rick Kazman, Software Architecture in Practice, 3rd
Edition. Addison-Wesley, 2013.
[3] Steve Freeman, Nat Pryce, Growing Object-Oriented Software, Guided by Tests,
Addison-Wesley, 2010.
[4] Ma, L., Zhang, C., Yu, B. et al. An empirical study on the effects of code
visibility on program testability. Software Qual J 25, 951–978 (2017).
[5] Payne, J. E., Alexander, R. T., & Hutchinson, C. D. (1997). Design-for-testability for object-oriented software. Object Magazine, 7(5), 34–43.
[6] M. Joshi and N. Sardana, "Design and code time testability analysis for
object-oriented systems," 2014 International Conference on Computing for
Sustainable Global Development (INDIACom), New Delhi, 2014, pp. 590-592.
[7] Anand Balachandran Pillai, Software Architecture with Python, Packt Publishing,
2017.
[8] Suri, Pushpa & Ratnani, Harsha. (2015). Object Oriented Software Testability
(OOST) Metrics Analysis. International Journal of Computer Applications
Technology and Research. 4. 359-367. 10.7753/IJCATR0405.1006.