Evaluation and Comparison of Static Test Coverage Analysis and Dynamic Test
Coverage Execution in API Testing
Abul Aala Almas Bari (13250849)
Industrial Thesis 2015
M.Sc. Computer Science (Software Engineering)
Department of Computer Science
National University of Ireland, Maynooth
County Kildare, Ireland
A dissertation submitted in partial fulfillment
of the requirements for the
M.Sc. in Computer Science (Software Engineering)
Head of Department: Dr. Adam Winstanley
Supervisor: Dr. Stephen Brown
January 2015
Word Count: 26309
Declaration
I hereby certify that this material, which I now submit for assessment on the program of study leading to the award of Master of Science in Computer Science (Software Engineering), is completely my own work and has not been taken from the work of others, save to the extent that such work has been cited and acknowledged within the text of my work.
Signed: Abul Aala Almas Bari Date: 25th Jan 2015
Acknowledgement
I am using this opportunity to express my gratitude to everyone who supported me throughout the course of this M.Sc. dissertation. I am thankful to Dr. Stephen Brown, Department of Computer Science, National University of Ireland, Maynooth, for his aspiring guidance, invaluably constructive criticism, and friendly advice during the project work. I am sincerely grateful to him for sharing his truthful and illuminating views on a number of issues related to the project. I also thank him for his invaluable technical guidance, great innovative ideas, and overwhelming moral support during the course of the project.
Thanks
Abul Aala Almas Bari
(13250849)
Under the Guidance of
Dr. Stephen Brown
Email: sbrown@cs.nuim.ie
National University of Ireland, Maynooth
Abstract
With the popularity of service computing, Web Services are widely used by companies and organizations. Web Services APIs are developed using Service-Oriented Architecture (SOA). An Application Programming Interface (API) is a set of services used by programmers to interact with other software. The difference between APIs and Web Services is that the latter facilitate interaction between two different computers over a network, while the former act as an interface between two applications. Web Services APIs can be tested by making API calls and observing the expected output along with the response time. In addition, API testing needs useful metrics that can determine whether the API functions properly. This project is inspired by an industrial problem: how best to measure the test coverage of the API and core functionality provided by a Web Service. There are two ways of measuring test coverage: static test coverage analysis and dynamic test coverage execution. Both techniques can generate a test coverage report, which helps to observe gaps (areas which are not tested) in API testing. This paper develops a model of API testing, which is used to derive metrics. The two test coverage techniques are compared on the basis of the metrics derived from the model of API testing, which helps to determine which test coverage approach generates a better result. This paper concludes with the advantages and/or disadvantages of static test coverage analysis over dynamic test coverage execution, and presents the results generated using both techniques. The conclusion of this paper is that a combination of static test coverage analysis and dynamic test coverage execution is most advantageous.
Table of Contents
Title Page
Declaration
Acknowledgement
Abstract
1. Introduction
1.1 Problem Statement
1.2 Motivation
1.3 Aims and Objectives
1.4 Approach
1.5 Report Structure
2. Background of Industrial Work
3. Related Work
3.1 Software Quality Measurement
3.2 Introduction to WSAPI
3.3 API Testing
3.4 Test Coverage
3.4.1 Static Test Coverage Analysis
3.4.2 Dynamic Test Coverage Execution
3.5 Combined Study
4. Tools and Techniques Used
4.1 Static Tool
4.1.1 SigTest User Guide
4.1.1.1 Generate the signature file
4.1.1.2 Generate the coverage report
4.1.2 Apache Ant
4.1.2.1 How to write a Simple Build file
4.1.3 Spec#
4.2 Dynamic Tool
4.2.1 EclEmma
4.3 Testing Tool
4.4 Unified Modeling Language Tool
5. Project Overview
5.1 Representative Code Structure
5.2 Use Case Diagram
5.3 Class Overview/Diagram
6. Test Experiments and Results
6.1 Model of API Testing
6.1.1 Model of GUI Testing
6.1.2 Model of API Testing
6.2 Experiments on Metrics Derived From the Model of API Testing
6.2.1 Method/Class/Package Test Coverage
6.2.1.1 Experiment 1
6.2.1.2 Result 1
6.2.2 Interfaces and Their Abstract Members Test Coverage
6.2.2.1 Experiment 2
6.2.2.2 Result 2
6.2.3 Callbacks Test Coverage
6.2.3.1 Experiment 3
6.2.3.2 Result 3
6.2.4 Data Type Conversion Test Coverage
6.2.4.1 Experiment 4
6.2.4.2 Result 4
6.2.5 Inherited Class and Method Test Coverage
6.2.5.1 Experiment 5
6.2.5.2 Result 5
6.2.6 Direct and Indirect Classes/Method Test Coverage
6.2.6.1 Experiment 6
6.2.6.2 Result 6
6.3 Experiments on Additional Metrics Covered by Static Test Coverage Analysis
6.3.1 Default Constructor Test Coverage
6.3.1.1 Experiment 7
6.3.1.2 Result 7
6.3.2 Attributes Test Coverage
6.3.2.1 Experiment 8
6.3.2.2 Result 8
6.3.3 Read-Only Methods Removal from Test Coverage Report
6.3.3.1 Experiment 9
6.3.3.2 Result 9
6.4 Results Summary
7. Evaluation
7.1 Metrics Derived from the Model of API Testing
7.1.1 Method, Class, and Package Coverage
7.1.2 Interfaces and Their Abstract Members Test Coverage
7.1.3 Callbacks Test Coverage
7.1.4 Data Type Conversion Test Coverage
7.1.5 Inherited Class/Method Test Coverage
7.1.6 Direct and Indirect Classes/Method Test Coverage
7.2 Extra Metrics Covered by Static Test Coverage Analysis
7.2.1 Default Constructor Test Coverage
7.2.2 Attributes Test Coverage
7.2.3 Read-Only Methods
8. Conclusion and Future Work
8.1 Extra Metrics Covered by Static Test Coverage Analysis
8.2 Limitations and Advantages of Static Test Coverage Analysis
8.2.1 Major Source of Error
8.2.2 Advantages of Static Coverage Analysis
8.3 My Contribution
8.4 Future Work
8.4.1 Extend the Model of API Testing
8.4.2 Develop a Tool which Combines Both Results
References
Appendix A: Terminology
Appendix B: Use of SigTest
Appendix C: Use of Apache Ant
Appendix D: Use of TestNG Framework
Appendix E: High Level Class Diagram
1. Introduction
Software quality has three main aspects: functional quality (how well the software performs the tasks it is intended to do), structural quality (how well the code is structured), and process quality (the quality of the process involved in developing the software). Measurement has always been fundamental to the progress of software testing. Software testing is one of the approaches to measuring software quality. Software testing involves dynamic verification [39] of the behavior of a program on a finite set of test cases, based on a specification (expected output for a given input) or on expert opinion. Software testing also plays an essential role in the software maintenance life cycle [6]. It is often practiced in industry to determine and improve the quality of software. Software testing can also produce a test coverage report. Test coverage is the extent to which a structure has been exercised, expressed as a percentage of the items being tested or covered. Test coverage helps in evaluating the effectiveness of testing by providing data on different coverage items [6]. Test coverage also helps to show areas (gaps in software testing) where further testing is needed. There are two approaches for measuring test coverage, namely, static test coverage analysis and dynamic test coverage execution. This paper discusses the major differences in the process of generating test coverage results using the two methodologies. For example: how are interfaces and their abstract member functions handled by each methodology? How do the two methodologies behave in the case of inherited members? How does the default constructor get reported in the test coverage report? The section below briefly explains how each approach works.
1. A dynamic test coverage tool, such as EclEmma [25, 26], is used with an executed program (over some inputs) to observe the executed code and analyze the results. It is used to measure test coverage when testing correctness, performance (optimization), finding runtime bugs, and testing the quality of the software [35]. Dynamic execution is based on a concrete execution: it can be very slow if exhaustive; it is precise, as it involves no approximation and needs no assumptions; and it is unsound¹, as its results cannot be generalized. Dynamic test coverage execution can also automatically search for assertion violations, divergences, and livelocks [34]. The process of using a dynamic tool to generate the test coverage report dynamically is referred to as dynamic test coverage execution in this project.
2. Static code analysis is the analysis of code without actually executing it [33]. A static tool scans the class files to determine which code could be executed and marks it as covered. Static analysis is conservative and sound, but it makes assumptions. It works on the declaration² of a member function rather than on the method definition³. Static analysis shows a "reference tree" rather than a "call tree", i.e. methods that could be called, but not necessarily ones that are. The process of using a static tool to generate the test coverage report statically is referred to as static test coverage analysis in this project.
¹ Unsound means something which is not based on reliable evidence or reasoning.
² Method declaration means declaring the fully qualified method name, e.g. int stringLength(String a);
³ Method definition means defining the actual body of the method, e.g.
int stringLength(String a) {
    return a.length();
}
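To make the "reference tree" versus "call tree" distinction above concrete, consider the following minimal Java sketch (a hypothetical example, not part of the thesis code):

public class CoverageDemo {

    // Statically, half() is referenced from process(), so a tool that
    // follows references would treat it as reachable (reference tree).
    static int half(int n) {
        return n / 2;
    }

    static int process(int n) {
        if (n % 2 == 0) {
            return half(n); // half() *could* be called from here
        }
        return n;
    }

    public static void main(String[] args) {
        process(3); // odd input: half() is never actually executed
    }
}

A static tool that follows references would mark half() as covered because it could be called, while a dynamic tool such as EclEmma, running main with the input 3, would report half() as never executed.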
1.1 Problem Statement
Can static test coverage analysis give extra test coverage results compared to dynamic test coverage? For example, can static test coverage analysis provide a test coverage measurement of the invocation routes for indirect method calls?
1.2 Motivation
In recent years, Service-Oriented Architecture (SOA) [15] has become a widely adopted paradigm for creating loosely coupled distributed systems. One of the characteristics of SOA is that the Application Programming Interface (API) of the available Web Services is published to serve customers using standard machine-readable description languages. The Web Services Description Language (WSDL) [1] helps in achieving this for Web Services. These APIs need to be tested in order to ensure proper functionality. While testing, usually two types of measurement are considered: requirements test coverage measurement (black-box testing) and structural test coverage measurement (white-box testing). Testing is used to produce test coverage reports, which can be used to determine the areas that need further testing. Some of the basic test coverage metrics, like branch coverage or statement coverage, require access to the source code, which is not always possible for Web Services. This is why Web Services testing approaches (test cases) usually rely on the API definition, and hence the focus is on black-box testing [16].
The motivation includes a mix of business and technical concerns; this section discusses both briefly. The business motivation for this project arises from a problem faced by my team during my internship. My team members in Dublin were writing test cases in Java for testing a Web Services Application Programming Interface (WS-API). The industrial background is discussed in detail in chapter 2: Background of Industrial Work. When test cases are written for thousands of files, it is hard to remember which ones have been tested and which remain. A test coverage report helps with this: it allows one to determine what has been tested and what is left. On the basis of the coverage report, one can prioritize work in those areas which are more important to test. The primary question was to determine how much of the API and core functionality had been tested and which areas needed further testing. This leads to the business motivation for my project.
The technical motivation is based on the metrics derived from the model of API testing. These metrics are used as a basis for the comparison between static test coverage analysis and dynamic test coverage execution. The model of API testing is developed and discussed in chapter 6: Test Experiments and Results. The metrics are derived from the model of API testing in the same chapter, in the section following the model.
The aim of this project is to evaluate the test coverage reports produced by static test coverage analysis and dynamic test coverage execution after testing the API. Effectiveness is assessed by measuring the test coverage reported for all of the metrics derived from the model of API testing. A further aim is to identify and determine the similarities, differences, advantages, and disadvantages of static test coverage relative to dynamic test coverage, and vice versa. The project also determines the extra test coverage metrics generated using static test coverage analysis.
1.3 Aims and Objectives
For the example code shown in figure 1, it is easy to see that branch coverage, statement coverage, and similar metrics are what need to be measured. The situation is not the same when measuring API test coverage. An API is an important form of software reuse and has been widely used. A model of API testing is required to derive the metrics, which can then be used as a basis for the comparison between static test coverage analysis and dynamic test coverage execution.
Figure 1: Example Code
The thesis addresses the following objectives in order to answer the research question:
1. Develop a model of API testing and then derive the metrics for test coverage from that model.
2. Determine the test coverage report of the API using both static and dynamic test coverage.
3. Identify and determine the differences and similarities between the two test coverage approaches.
4. Identify and determine the advantages and/or disadvantages of using the static approach over the dynamic approach.
5. Identify and determine the test coverage of invocation routes of methods, which is not measured directly by the dynamic tool.
1.4 Approach
This thesis is based on an industrial problem; hence I designed classes according to the work I was doing in my organization during my internship. The purpose of this project is to evaluate the static and dynamic test coverage of an API. In order to answer the research question, the following steps were proposed:
● Select representative Java files
Figure 16 in chapter 5: Project Overview shows the Java files used for measuring API test coverage. Important functionality-related Java files were selected (such as ImplementationType.java, Login.java, Server.java, etc.) to perform the experiments, in order to obtain useful test coverage results for evaluation. The core functionality is implemented in the OvmWsClient.java file. These Java files were implemented to mirror the industrial work. The reason for writing representative code is that the company does not allow its own code to be used for thesis work, so files were written to resemble those I was working on at the start of the thesis.
● API testing using black-box testing
Black-box testing is sometimes also referred to as functional testing or specification-based testing [28]. The test cases were selected on the basis of the given specification and expert opinion. Some additional positive and negative tests were included from the manual tests. Three black-box testing approaches (Equivalence Partitioning (EP), Boundary Value Analysis (BVA), and Combinations of Inputs (CI)) were used in testing each method in each selected Java file of the API and core functionality. The main objective was to test the API and core functionality. Negative tests are tests which are expected to fail. A sketch of such a test appears after this list.
● Identify and determine static and dynamic test coverage
After testing, the next task is to generate the test coverage report using static test coverage analysis and dynamic test coverage execution, based on all the metrics derived from the model of API testing. The main work lies in this step: it determines the behavior of the test coverage results generated by both approaches across all the metrics. For example, how is abstract method test coverage displayed in the test coverage report?
● Compare static and dynamic coverage analysis results for test coverage and code coverage, and draw conclusions
Finally, on the basis of the test coverage results, static test coverage analysis is compared with dynamic test coverage execution using the following criteria:
1. Test coverage for all metrics derived from the model of API testing.
2. Additional metrics covered by static test coverage analysis.
3. The number of tests written and the time taken to execute for the static and dynamic test coverage techniques.
4. Advantages and/or disadvantages of both test coverage techniques.
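As an illustration of the black-box approach mentioned above, the following sketch shows the general shape of such a TestNG test. It is hypothetical: createServerPool, its argument rules, and the assumed 255-character name limit are illustrative assumptions; only OvmWsClient is taken from the representative code, and the constructor used here is assumed.

import org.testng.Assert;
import org.testng.annotations.Test;

public class ServerPoolApiTest {

    private final OvmWsClient client = new OvmWsClient();

    // Equivalence Partitioning: one representative value from the valid partition.
    @Test
    public void createPoolWithValidName() throws Exception {
        Assert.assertNotNull(client.createServerPool("pool1"));
    }

    // Boundary Value Analysis: a name at an assumed maximum length of 255.
    @Test
    public void createPoolWithMaxLengthName() throws Exception {
        StringBuilder name = new StringBuilder();
        for (int i = 0; i < 255; i++) {
            name.append('p');
        }
        Assert.assertNotNull(client.createServerPool(name.toString()));
    }

    // Negative test: an empty name is expected to be rejected.
    @Test(expectedExceptions = IllegalArgumentException.class)
    public void createPoolWithEmptyName() throws Exception {
        client.createServerPool("");
    }
}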
1.5 Report Structure
Further chapters of this dissertation are organized as follows:
Chapter 2: This chapter discusses the background of the industrial work (on which the problem is based).
Chapter 3: This chapter discusses the context of the problem in detail and previous work done.
Chapter 4: This chapter describes the tools used in testing and in measuring test coverage.
Chapter 5: This chapter includes the details of my project: the package and classes which I wrote to generate the results (functionality and a description of the classes), including the use case diagram, the class diagram, and their descriptions.
Chapter 6: This chapter explains the experiments performed and the results they generated. It includes experiments and results for all the metrics derived using the API model.
Chapter 7: This chapter presents the evaluation of my work. It explains the reasons for all the results generated using both static test coverage analysis and dynamic test coverage execution.
Chapter 8: This chapter discusses the conclusion and future work.
References: This section contains citations for all references used in this project.
Appendix: It is divided into five parts as follows:
1. It discusses all the terminology.
2. It discusses the usage of SigTest (the static tool for test coverage generation).
3. It discusses the usage of Apache Ant.
4. It discusses the use of the testing (TestNG) framework.
5. It presents the high-level class diagram.
2. Background of Industrial Work
This section explains the business motivation for this project. It explains the background of WS-API testing and the need for measuring test coverage. The Oracle Virtual Machine (OVM) WS-API provides a Web Services interface for building applications with Oracle VM. SOAP, REST-XML and REST-JSON [1, 2, 3, 4, 5] are the protocol specifications used for exchanging structured information when implementing Web Services in computer networks. These protocols offer the same functionality, and the choice of protocol depends on the client implementation. A client program can use only one type of API interaction: SOAP, REST-XML, or REST-JSON. Mixing these protocols is not supported and may cause unexpected results. Client code uses this WS-API to communicate with the Oracle Virtual Machine Manager (OVMM) [37]. There are three ways for client code to communicate with the OVMM using the WS-API, as explained below. Figure 2 shows the same in pictorial form.
1. Command Line Interface (CLI): This is one approach used by a client to communicate with the OVMM using the WS-API. The CLI is accessible using an SSH client and uses the WS-API [37].
2. Graphical User Interface (GUI, UI, or Web UI): This is the second approach used by a client to communicate with the OVMM using the WS-API. The GUI is accessible using a web browser and uses the WS-API [37].
3. WS-API Java Tests: This is the last approach for communicating with the OVMM using the WS-API. The teams were involved in porting Python code into Java code using a testing framework known as TestNG (Testing Next Generation) [27]. The aim of writing the test cases is to test the API and core functionality provided by the WS-API, and also to communicate with the OVMM [37].
Figure 2: High level description of the Oracle Virtual Machine (Diagram adapted from
internal oracle document) [37]
The WS-API allows clients to write applications that customize, control, and automate OVM. For example, it allows clients to [37]:
1. Create, start, restart, and stop virtual machines, and migrate them between different OVM Servers.
2. Create server pools from one or more servers, and perform other operations on servers.
3. Expand the capacity of the OVM environment by adding more OVM Servers and storage providers.
4. Integrate OVM with third-party tools such as monitoring applications.
OVM is a platform which provides a fully equipped environment with the latest technology and benefits of virtualization. OVM allows clients to deploy operating systems (OS) and application software in a virtualized environment. OVM insulates users and administrators from the underlying virtualization technology and allows daily operations to be conducted using goal-oriented GUI interfaces [37]. The main components shown in figure 3 are discussed briefly below:
1. Client Applications: Interfaces to OVMM are provided using either the GUI (Web UI), accessible using a web browser, or the CLI, accessible using an SSH client; both use the WS-API. All communications with OVMM are secured using either key-based or certificate-based technology.
2. Oracle VM Manager: This is used to manage OVM Servers, Virtual Machines (VMs), and so on. It also allows managing the infrastructure directly using the command line. Each of the interfaces runs as a separate application from the OVMM core and interfaces with it using the WS-API. It is also accessible using the Web UI. OVMM can run on a standalone computer, or as part of a Virtual Machine running on an instance of OVM Server. All actions within OVMM are triggered via the OVM Agent using the WS-API.
3. Oracle VM Manager Database: This is used by the OVMM core to store and track configuration, status changes, and events. The MySQL Enterprise database is supported by OVMM. The database is configured for the exclusive use of OVMM and must not be used by any other applications. It is automatically backed up by the backup manager on a regular schedule, and facilities are provided to perform manual backups as well.
4. Oracle VM Server: OVM Server is installed on a bare-metal computer and contains the OVM Agent (ovs-agent) to manage communication with OVM Manager. It has two domains, as follows:
a. dom0 (domain zero): the management or control domain, with privileged access to hardware and device drivers.
b. domU (user domain): an unprivileged domain without direct access to the hardware or device drivers. It is started and managed on an OVM Server by dom0.
One or more OVM Servers are clustered together to create server pools. This allows OVMM to handle load balancing and failover for high-availability (HA) environments. Virtual Machines (VMs) run within a server pool and can easily be moved between all servers in the pool. Server pools also provide logical separation of OVM Servers and Virtual Machines.
5. External Shared Storage: This provides storage for many purposes. It is also important for enabling high-availability (HA) options with the help of clustering. OVMM supports the discovery and management of storage, interacting with the OVM Servers through the Storage Connect framework; the OVM Servers in turn interact with the storage components. OVM provides support for a variety of external storage types, including NFS, iSCSI, and Fibre Channel [37].
Figure 3: Product Architecture (Diagram adapted from oracle document) [37]
The product architecture shown in figure 3 allows the client code to perform the following operations (and many other operations can be performed on each of them). This is a high-level description.
1. Discover Servers: OVM Server can be installed on either x86 or SPARC hardware platforms. An OVM Agent, also known as ovs-agent, is installed on each OVM Server; it is a daemon that runs within dom0 on each OVM Server instance. Its primary roles are to:
- Facilitate communication between OVM Server and OVM Manager.
- Carry out all the configuration changes required on an OVM Server instance, in accordance with the messages sent to it by OVM Manager.
- Start and stop virtual machines as required by OVM Manager.
The OVM Agent also maintains its own log files on the OVM Server, which can be used for debugging issues.
2. Discover Storage: Storage in Oracle VM can be provided using any of the following technologies:
- Shared Network Attached Storage: NFS (Network File System).
- Fibre Channel SANs connected to one or more host bus adapters (HBAs).
- Local disks.
- Shared iSCSI SANs: abstracted LUNs or raw disks accessible over existing network infrastructure.
3. Create Server Pool: A server pool is a required entity in OVM, even if it contains only one OVM Server. Usually one server pool has several OVM Servers, and an OVM environment may contain one or more server pools. Server pools can be clustered or non-clustered, but typically they are clustered. In all versions of OVM before 3.4, there was a Master server, which was responsible for centralized communication with the OVMM; if necessary, any other server in the server pool could take over the Master role. In OVM 3.4.1 and later, there is no concept of a Master server anymore. A virtual machine (VM) can be moved from one OVM Server to another without an interruption of service, because server pools have shared access to storage repositories.
4. Create Repository: After completion of the storage phase, the next task is to create a storage repository. When the storage repository is created, OVM Manager can take ownership of it, and the OVM Servers selected during the creation process have access to the repository in order to store VMs, ISO files, templates, and so on. A repository can also be created on a local storage disk, with the caveat that the data within such a repository is only available to virtual machines running on the OVM Server to which the disk is attached.
5. Create Network: The network infrastructure of an OVM environment establishes connections between the following:
- OVM Servers within the environment.
- An OVM Server and the storage subsystems that it uses.
- OVM Manager and all OVM Servers in the environment.
- Virtual Machines (VMs) running in a server pool.
All of the network connections discussed above can take advantage of the features supported by OVM, which include NFS (networked file systems), clustering, redundancy and load balancing, bridging, and support for Virtual LANs (VLANs).
6. Create Virtual Machine (VM): The entire Oracle VM environment is designed for the purpose of running and deploying virtual machines; they are therefore a significant component within the Oracle VM architecture. Virtual machines offer different levels of support for virtualization at the level of the operating system, often affecting performance and the availability of particular functionality afforded by system drivers. A VM can be created from different types of resources: a template, an assembly (which contains preconfigured VMs), or from scratch using an ISO file (image). The creation of a VM from a template uses a cloning technique: the template is imported (as an archive in zip format), unpacked, and then stored as a VM configuration file with images of its disks. This is then cloned to create a new instance in the form of a VM. Similarly, an existing VM can be cloned to create a new VM, or a new template (which can in turn be used to create further VMs).
3. Related Work
This section discusses the background related to Web Services Application Programming Interfaces (WSAPI), software quality, API testing, and test coverage. It discusses the two approaches to measuring the test coverage of the API and core functionality provided by a WSAPI: static test coverage analysis and dynamic test coverage execution. It also discusses how the two test coverage approaches can be combined.
3.1 Software Quality Measurement
Today the whole world runs on software; every business depends on its use. With this in mind, in a competitive world, software quality really matters: every company wants to make sure that its clients are using a high-quality product. Software quality measurement has always been an important step in the progress of software development. Software metrics are very important for making quantitative or qualitative decisions and for reducing risk in software. One widely used definition of software quality is:
The degree of conformance to explicitly stated functional and performance requirements, explicitly documented development standards, and implicit characteristics that are expected of all professionally developed software. [10]
Software quality measurement is a process of collecting and analyzing data about software. Characteristics of a good software quality measure [8] are:
● Reliability - the ability of a system to perform its required functions under stated conditions for a fixed amount of time.
● Sensitivity - variability in responses reflects variability in the stimulus or situation.
● Validity - it measures what it intends to measure.
There are three aspects of software quality [38], as explained below:
1. The first aspect is functional quality. It means that the software efficiently performs the tasks it is instructed to do.
2. The second aspect is structural quality, which means that the code is well structured. Structural quality is generally hard to test.
3. The third aspect is process quality, which is also critically important. It concerns the process involved in developing the software.
All three aspects of software quality are equally important and should be tested or implemented properly in order to improve quality and obtain a better product.
3.2 Introduction to WSAPI
Web Services are client and server applications that communicate over the World Wide Web's (WWW) Hypertext Transfer Protocol (HTTP). According to World Wide Web Consortium [5, 9] standards, Web Services provide a standard means of interoperating between software applications running on a variety of platforms and frameworks. Web Services can be combined in a loosely coupled way to achieve complex operations. There are two types of Web Services: RESTful and RESTless. RESTful services are those which use REST (Representational State Transfer), while RESTless services are those which use SOAP (Simple Object Access Protocol). Many applications today use web APIs; for example, in Gmail [4] the JavaScript client program invokes web APIs to get the mail data. The WSAPI has become an important tool for web developers and is also becoming an effective marketing tool for many types of business. Some of the most popular and widely used Web Services APIs, and the increase in their use over time, are shown in figure 4 and figure 5 respectively [12]. Model-based techniques have been used to build test models to obtain better service quality and reliability. These test models are used to derive the test cases for testing the API and core functionality provided by Web Services. Testers can easily update these models and re-generate the test suite for any changed requirements, avoiding error-prone manual changes [5].
Figure 4: Popular Web API Categories
Figure 5: APIs Usage over Time
3.3 API Testing
A key part of testing software is ensuring its functional quality, because when a set of services is combined, many more opportunities for error or failure may be introduced. Software testing is an essential activity in measuring software quality and in its maintenance. Software testing also involves the generation of a test coverage report. These reports contain information about what has been tested and what has not. For a large software project, it is hard to remember what functionality has been tested and what is left. Hence the test coverage report helps testers to observe the areas which require further testing and to prioritize their testing work in those areas which are more important to test first.
API testing is different from other software testing; they differ in the metrics to be tested. Software testing is generally used to observe some common metrics [28] like branch coverage, statement coverage, MC/DC coverage, DU pair coverage, etc. On the other hand, API testing is more about testing the application end to end, service by service, and interface by interface. API testing can be compared with GUI testing. The difference between the two is that the GUI has a user-friendly collection of windows, dialog boxes, buttons, and menus, whereas the API consists of a set of direct software links, or calls, to lower-level functions and operations [13, 14].
Developers test an API by writing kernel-level [40] unit tests. Unit testing provides test coverage by checking the outputs of the API for different combinations of inputs. On the other hand, functional or system testing [40] invokes API calls indirectly by running test cases with customer inputs (reflecting how customers use the system).
3.4 Test Coverage
Testing can also produce a test coverage report, which helps in evaluating the effectiveness of testing by providing data on different coverage items. According to the International Software Testing Qualifications Board (ISTQB) [7]:
"Test coverage is the extent that a structure has been exercised as a percentage of the items being covered. If test coverage is not 100%, then more tests may be designed to test those items that were missed and, therefore, increase test coverage."
Test coverage can help in monitoring the quality of testing. It also assists in directing test case generators to generate test cases for those areas which need further testing; these are also known as the gaps in software testing, i.e. the areas which have not been tested. It also provides the user with information on the status of the verification process [21]. Test coverage also helps in regression testing, test case prioritization, test suite augmentation, and test suite minimization.
There are two ways of evaluating the test coverage of software: static test coverage analysis and dynamic test coverage execution. There are many dynamic test coverage tools (such as JaCoCo, EclEmma, Clover, EMMA, etc.). The following sections briefly review some facts about static test coverage analysis and dynamic test coverage execution [29]. Static analysis and dynamic execution arose from different communities and evolved along parallel but separate tracks. Traditionally, they have been viewed as separate domains, but they can sometimes be combined to generate better results. Both test coverage approaches are discussed briefly in the following sections.
3.4.1 Static Test Coverage Analysis
Static test coverage analysis analyzes the given code and reasons about all possible behaviors that might arise at run time. Static analysis can also be used to prove the absence of run-time errors using a verification tool like the Polyspace verifier [36]. Verification is a technique that may be used at any time and requires no prior knowledge of the code to be analyzed. Compiler optimizations are standard static analyses. According to the MIT Lab for Computer Science, static analysis is conservative and sound [29, 31]:
Soundness guarantees that analysis results are an accurate description of the program's behavior, no matter on what inputs or in what environment the program is run. Conservatism means reporting weaker properties than may actually be true; the weak properties are guaranteed to be true, preserving soundness, but may not be strong enough to be useful.
A sound system is one that proves only true sentences, i.e., provable(p) => true(p) [33]: any statement the system proves is guaranteed to be true. Static test coverage analysis means obtaining the test coverage without actually executing the piece of code [33]. Tools like Spec# [18, 19] have been very useful for small and simple programs in detecting runtime errors without actually running the program. They have proved the importance of using an intermediate language (e.g. Boogie) [17, 18, 19] for static verification of code. Static analysis can also be used to prove a program segment incorrect, i.e., to find bugs in the program.
One approach to estimating test coverage using static analysis is explained in the following section. This work was carried out by Tiago L. Alves and Joost Visser of the Software Improvement Group, Netherlands [30]. They introduced a technique of slicing static call graphs at the method, class, and package level to estimate test coverage, and validated the results against those obtained using a dynamic tool. The steps are described below [30]:
1. Graph construction: A graph is derived to represent packages, classes, interfaces, and methods, and the relations between them. A graph is usually represented by the mathematical formulation stated below [30]:
G = (V, E)
where
G = the graph to be drawn
V = the set of vertices (nodes in the graph)
E = the set of edges between these vertices (the relationships between them)
The packages (P), classes (C), interfaces (I), and methods (M) make up the vertices (V) of the graph; the set of vertices is therefore represented as Nn ∈ V where n ∈ {P, C, I, M}. The edges of the graph are formed by the following relations [30]:
a. DT - defines a class or an interface (define type)
b. DM - defines a method (define method)
c. DC - a direct call of a class or method (direct call)
d. VC - a virtual call made between methods (virtual call)
Hence, the set of edges is expressed mathematically as E ⊂ {(u, v)e | u, v ∈ V}, where e ∈ {DT, DM, DC, VC}.
2. Count methods per class: Based on the above definitions, the following metrics are measured:
a. The number of covered methods per class (CM) is calculated by counting the outgoing define-method edges whose target is a method contained in the set of covered methods.
b. The number of defined methods per class (DM) is calculated by counting the outgoing define-method edges per class.
3. Estimate static test coverage: Using the above two metrics, one can calculate basic coverage metrics such as:
a. Class coverage: the ratio between covered methods and defined methods. It can be expressed mathematically as [30]:
Class coverage (in %) = (∑ covered methods / ∑ defined methods) × 100
b. Package coverage: the ratio between the total numbers of covered and defined methods over all the classes of the package. The mathematical expression is [30]:
Package coverage (in %) = (∑ covered methods / ∑ defined methods) × 100, with the sums taken over all classes in the package
The results obtained from static estimation answer the question "can test coverage be determined without actually running tests?" It was observed that static test coverage can not only be used as a predictor for test coverage, but can also detect coverage fluctuations.
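The following Java sketch shows how these two formulas can be computed once the call graph has been sliced into defined and covered methods. It is a minimal sketch assuming a simplified data model; it is not the implementation from [30].

import java.util.*;

class StaticCoverageEstimator {

    // DM edges: class name -> the set of methods that class defines.
    Map<String, Set<String>> definedMethods = new HashMap<String, Set<String>>();

    // Methods statically reachable from the test code via DC/VC call edges.
    Set<String> coveredMethods = new HashSet<String>();

    // Class coverage (%) = (covered methods / defined methods) x 100.
    double classCoverage(String className) {
        Set<String> defined = definedMethods.containsKey(className)
                ? definedMethods.get(className) : Collections.<String>emptySet();
        if (defined.isEmpty()) {
            return 0.0;
        }
        int covered = 0;
        for (String m : defined) {
            if (coveredMethods.contains(m)) {
                covered++;
            }
        }
        return 100.0 * covered / defined.size();
    }

    // Package coverage (%): the same ratio, with the sums taken over all
    // classes in the package.
    double packageCoverage(Collection<String> classesInPackage) {
        int covered = 0;
        int defined = 0;
        for (String c : classesInPackage) {
            Set<String> d = definedMethods.containsKey(c)
                    ? definedMethods.get(c) : Collections.<String>emptySet();
            defined += d.size();
            for (String m : d) {
                if (coveredMethods.contains(m)) {
                    covered++;
                }
            }
        }
        return defined == 0 ? 0.0 : 100.0 * covered / defined;
    }
}

For example, a class defining four methods of which three are statically reachable from the tests would report a class coverage of 75%.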
3.4.2 Dynamic Test Coverage Execution
Dynamic test coverage execution operates by executing the given piece of code and observing its behavior [31, 35]. There are dynamic test coverage tools (like JaCoCo, EclEmma, Clover, EMMA, etc.) [25, 26] which generate the test coverage report. Theoretically and experimentally, dynamic analysis is precise, because there is no approximation in the process of test coverage report generation [31, 33]. Dynamic test coverage execution examines the actual, exact run-time behavior. An analysis is "dynamic" if it emphasizes control-flow accuracy over data-flow richness/generality [33]. Dynamic analysis of a program provides concrete reports of errors, which makes them easier to debug [32].
Directed Automated Random Testing (DART) [11, 32] introduced the concept of dynamic verification with lightweight static analysis. The major challenge involved in generating test coverage by dynamic execution is selecting a representative set of test cases (inputs to the program being analyzed) [29]. Dynamic execution can be used even in situations where program semantics (but not perfect program semantics) are required. As a result, and because of its significant successes, dynamic execution is gaining credibility. The major drawback of dynamic test coverage execution is that its results may not generalize to future executions.
There are many tools which can generate test coverage results using the dynamic approach. In this project I have used EclEmma [25, 26], an Eclipse plug-in which is installed and used to generate a test coverage report by running the given piece of code.
3.5 Combined Study
Static test coverage analysis and dynamic test coverage execution can enhance each other, and the two techniques can be applied together to generate a better result than either can produce individually. Testing on its own (which does not guarantee completeness) or verification on its own (which may guarantee completeness but does not deal with the implementation) is not enough to guarantee software quality. Runtime monitoring of software is mainly used for profiling, performance analysis, and software optimization, as well as software fault detection, diagnosis, and recovery [31]. The two approaches are studied individually, then compared with each other, and can lastly be combined in such a manner as to obtain a better and more efficient result. Static analysis verifies a program for all possible execution paths before the actual execution. Dynamic execution verifies the system's properties by executing the program; it can detect potential faults at runtime, and it maps the abstract specification to the actual concrete implementation. The soundness of static analysis can benefit the efficiency and precision of the dynamic test coverage execution process. Combining both approaches can help developers in identifying errors and in test case generation, because static analysis provides the structure of the program; it thus becomes easier to produce test cases which cover most of the program paths. Also, both methodologies operate on a program and a specification. Hence, they can be combined in the way shown in figure 6: the program is combined with the specification, and static analysis of the code is performed first; the result obtained can then be used as input to dynamic test coverage execution, for example to generate its test cases.
Figure 6: Combine Use of Static Analysis and Dynamic Execution
All the topics discussed in this chapter are useful and necessary for understanding the technical background and research work in the field of static test coverage analysis and dynamic test coverage execution. They also helped me to understand the basic principles on which the two approaches operate. The concepts of API testing and the model of GUI testing help to develop the model of API testing. The work on verification tools such as Spec# is used for program verification and for investigating runtime errors without actually running programs. Test coverage and the two coverage approaches are used to generate the test coverage metrics and to compare them. This chapter also helps in understanding the concepts of testing and test coverage. This information is useful for performing API testing and then generating test coverage by static and dynamic techniques.
4. Tools and Techniques Used
This chapter describes all the tools and techniques used in testing and in test coverage report generation. It contains four subsections, as follows:
1. The first section discusses the tools and techniques used for generating test coverage results using the static tool.
2. The second section discusses the tools and techniques used for generating test coverage results using the dynamic tool.
3. The third section discusses the tool used for testing.
4. The fourth section discusses an Eclipse [25, 26] plug-in used to generate the class diagram.
Before discussing the tools, it is important to explain the relationship between them. Figure 7 shows the relation between all the tools used. This is helpful for understanding which tools have been used and why.
Figure 7: Diagram showing the relation between tool and code
The diagram shown in figure 7 explains how all the tools and the code are related to each other. It starts with designing the use case and class diagrams. First, the source code (including the API and core functionality) was written using the class diagram and use case diagram generated by the Unified Modeling Language (UML) tool (StarUML). After code development, Spec# [17, 18, 19, 20] uses the source code for formal verification (to remove runtime errors). This is important before generating results with the static test coverage technique: the code is not executed in static test coverage analysis, so runtime errors must be removed before test coverage report generation. Test classes were then written using the testing tool. The static and dynamic test coverage tools use the test classes to generate test coverage results. The static test coverage tool uses Apache Ant to generate its test coverage results. The succeeding sections discuss the tools used.
4.1 Static Tool
This section discusses the tools used to generate test coverage results using the static test coverage technique. It is further divided into three subsections, as follows:
1. The first section discusses the use of the SigTest user guide to generate a signature file (a file which contains all the information about classes and their members) and then use that signature file to generate a test coverage report.
2. The second section discusses the use of Apache Ant to generate the test coverage report. The static tool is driven from the command line; hence Apache Ant is used to generate the test coverage report by collecting all the command-line instructions in a single build file.
3. The third section discusses Spec#, a tool to determine runtime errors without actually executing the code.
4.1.1 SigTest User Guide
SigTest is a tool which generates test coverage using static analysis. It is distributed as a jar file and generates test coverage in two simple steps. The signature test tool operates from the command line to generate signature files; it can also be used to manipulate a signature file depending on the user's requirements. A signature file is a textual representation of all public and protected members of an API. The two simple steps to generate the coverage of the API methods are as follows:
1. Generate the signature file of the APIs under test.
2. Use this generated signature file to work out the test coverage of the APIs.
Both steps are described below along with the syntax and logic.
4.1.1.1 Generate the signature file
The signature file can be generated with the following syntax [22]. The principle on which it works is explained after the syntax.
java -jar sigtestdev.jar Setup [options]
Table 4.1 describes the available command options and the values that they accept. For more detail please refer to appendix B.
Table 4.1: Setup Options
-classpath path (Required): Path to one or more APIs under test.
-FileName file_name (Required): Name of the signature file to be created.
-package package_or_class_name (Optional): Specifies a class or package to be included in the signature file. The -package value acts as a filter on the set of classes specified in -classpath.
The sigtestdev.jar file checks the options given in the syntax and generates the signature file. It reads all the jar files provided on the class path, filters the packages and classes specified by the -package option, and generates a file listing all the public and protected members. The generated file takes the name supplied by the -FileName option. Further options exist, but all of them are optional and can be used according to the user's requirements. The signature file thus created is shown in figure 8 and explained after the figure.
Figure 8: A small snippet to show the content of signature file
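The figure is an image and cannot be reproduced here; a representative snippet, reconstructed from the description that follows (the constructor and field lines are illustrative assumptions), looks like this:

#Signature file v4.1
#Version
CLSS public wsapi.Server
cons public init()
meth public wsapi.Server killServer(wsapi.OvmWsClient,wsapi.Server) throws java.lang.Exception
supr java.lang.Object
hfds m_name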
Figure 8 shows a small portion of the generated signature file. The first two lines are the header. #Signature file v4.1 means that the file is a signature file and that its version is 4.1; the version has a special meaning and depends on the version of the jar file used (there are several other versions, see appendix B). The second line is #version, which can be set to any number with the -version option (for details of its use please refer to appendix B). After these two lines the signature file contains the information about every class and its members. Every class is declared with the keyword CLSS. Every public and protected method is described by meth. The default and parameterized constructors are declared as cons. An enumeration is declared with the inner keyword, the superclass as supr, and the private data members of the class as hfds. All method and class names are fully qualified. For example, in figure 8, meth public wsapi.Server killServer (wsapi.OvmWsClient, wsapi.Server) throws java.lang.Exception is a method: it returns an object of the Server class of the wsapi package, accepts two arguments, and throws an exception.
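For illustration, a hypothetical invocation for this project (the paths are assumptions based on the project layout described in chapter 5; ";" is the Windows classpath separator) would be:

Java -jar sigtestdev.jar Setup -classpath lib/rt.jar;bin -FileName reports/wsapi.sig -package wsapi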
4.1.1.2 Generate the coverage report
The signature file is then used to generate the test coverage report, with the syntax given below [22]; the principle on which it works is explained after the syntax.
Java -jar apicover.jar [options]
Table 4.2 describes the options used to generate the test coverage report for this project work. For more detail on usage please refer to appendix B.
Table 4.2: Test Coverage Options
-ts path (required): Path to the test classes.
-excludeFields (optional): Exclude all data members from all classes and interfaces.
-api path/filename (required): Path of the signature file to be used.
-apiInclude packageName (optional): Recursively include classes from the API package.
-mode [w | r] (optional): Specifies the mode of test coverage calculation, w for worst case or r for real world. Defaults to worst case (refer to coverage analysis modes).
-detail [0|1|2|3|4] (optional): Specifies the level of report detail as an integer from 0 to 4. Defaults to 2 (see Table 4.3, Report File Contents for Levels of Detail).
-format [plain|xml] (optional): Specifies the report format as plain text or XML. Defaults to plain.
-report path/filename (optional): Path to place the generated report file. Defaults to standard out.
The apicover.jar file processes the contents of the signature file and the specified test classes to generate the test coverage report. The -ts option (from table 4.2; refer to appendix B for more detail) gives the path of the test classes. apicover.jar reads the public and protected members of every class from the signature file and checks whether each member has been referenced in the test classes; if a member is referenced, it is marked as tested. This continues for every member of every class in the signature file, after which the totals of all members and of tested members are known. apicover.jar then calculates the test coverage percentage with the formula:
Test coverage (%) = (tested members / total members) × 100
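For example, if the signature file lists 40 public and protected members and 30 of them are referenced in the test classes, the reported coverage is (30 / 40) × 100 = 75%.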
Further options control how the result is displayed. For example, to redirect the test coverage report to a file, the file is specified with the -report option, and apicover.jar redirects the output to it. In the same way apicover.jar checks every option in the syntax and acts accordingly (for more detail please refer to appendix B).
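For illustration, a hypothetical invocation combining the options above (the paths are assumptions, not the project's actual layout) would be:

Java -jar apicover.jar -ts tests -api reports/wsapi.sig -apiInclude wsapi -mode r -detail 4 -format plain -report reports/coverage.txt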
Coverage Analysis Modes
The API Coverage tool uses these two modes of analysis [22]:
 Real World Mode: the report file only contains the members of the API classes under test. The class java.lang.Object is the root of the class hierarchy; its member functions (clone(), equals(), finalize(), hashCode(), toString()) are not included in real-world mode. Hence real-world mode gives a more accurate result.
 Worst Case Mode: the test coverage report includes all such methods (clone(), equals(), finalize(), hashCode(), toString()) in each class mentioned in the signature file. Unlike real-world mode, this lowers the overall test coverage percentage.
Table 4.3 - Report File Contents for Levels of Detail
Detail setting 0: package summary only.
Detail setting 1: package and class summaries.
Detail setting 2: package and class summaries, plus members not covered.
Detail setting 3: package and class summaries, plus members not covered.
Detail setting 4: package and class summaries, members not covered, and covered members.
Table 4.3 [22] explains the detail levels used when generating the result: the detail level determines what information is included in the test coverage report file. For example, to see the coverage of packages, classes, tested members and members not tested, the syntax must include -detail 4 (refer to appendix B). Reports can likewise be generated at any other detail level, as required.
4.1.2 Apache Ant
Static test coverage analysis does not require the execution of test classes, but it does involve more than one instruction to generate a report, each with many arguments. To avoid retyping all the instructions every time a report is generated, it is recommended to use Apache Ant [23, 24]. Apache Ant is a command-line tool which executes instructions from build files; its best-known use is in building Java applications, and it supplies a number of built-in tasks for compiling, assembling, testing and running Java code. Apache Ant [23, 24] is extremely flexible and does not impose coding conventions or directory layouts on Java projects.
4.1.2.1 How to write a Simple Build file
Ant's [23, 24] build files are written in XML. Each build file contains one project and at least one (default) target; targets contain task elements. Each task element of the build file can have an id attribute and can later be referred to by the value supplied to it, which must be unique. For more detail on writing a simple build.xml file, please refer to appendix C.
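As an illustration, a minimal sketch of the kind of build file used here to chain the two static-coverage commands (the target names and paths are assumptions, not the project's actual build.xml):

<project name="static-coverage" default="report">
  <!-- Step 1: generate the signature file with sigtestdev.jar -->
  <target name="signature">
    <java jar="lib/sigtestdev.jar" fork="true">
      <arg line="Setup -classpath lib/rt.jar;bin -FileName reports/wsapi.sig -package wsapi"/>
    </java>
  </target>
  <!-- Step 2: run apicover.jar against the signature file and the test classes -->
  <target name="report" depends="signature">
    <java jar="lib/apicover.jar" fork="true">
      <arg line="-ts tests -api reports/wsapi.sig -mode r -detail 4 -report reports/coverage.txt"/>
    </java>
  </target>
</project>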
4.1.3 Spec Sharp
Static test coverage analysis does not execute the code, yet many errors only appear at run time rather than at compile time; such code may throw an exception and fail to produce the correct output. To avoid this problem there are tools that verify run-time checks and remove runtime errors without actual execution. One such tool is Spec#.
Spec Sharp (also written Spec#; an online compiler is available at www.rise4fun.com/specSharp [19]) is a programming system that facilitates the development of correct software. The Spec# language extends C Sharp (C#) with contracts. The Spec# tool suite consists of a compiler that emits run-time checks for contracts, and a static program verifier that attempts to mathematically prove the correctness of programs [17, 18]. The diagram in figure 9 shows how the Spec# programming system works [20].
Figure 9 explains how Spec# finds the run-time errors in code without actually running it. Code written for the Spec# compiler is first compiled to bytecode. This bytecode is then translated into the Boogie language by a translator. The Verification Condition (V.C.) generator produces logical verification conditions from the Spec# program: the conditions that would otherwise have to be checked at run time. An automatic reasoning engine (e.g. Z3 [20]) then analyzes the verification conditions, either proving the correctness of the program or finding run-time errors. Such a reasoning engine is known as a SAT [20] or SMT (Satisfiability Modulo Theories) solver; Z3 is Microsoft's SMT solver, which implements a decision procedure for the formulas generated by Boogie.
Figure 9: Flowchart showing how Spec# works to find run-time errors [20]
Spec# has 3 main method contracts, as given below:
1. The precondition expresses the constraints under which the method operates properly. It is written with the "requires" keyword.
2. The postcondition expresses what will hold when a method completes properly. It is written with the "ensures" keyword.
3. The loop invariant has to be true before and after the loop. It is written with the "invariant" keyword.
Figure 10 - Spec sharp example [20]
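As the figure is an image, the following Spec# sketch reconstructs the method it depicts from the description below (the method name ISqrt is an assumption):

static int ISqrt(int x)
  requires 0 <= x;                                          // precondition
  ensures result*result <= x && x < (result+1)*(result+1);  // postcondition
{
  int r = 0;
  while ((r+1)*(r+1) <= x)
    invariant r*r <= x;                                     // loop invariant
  {
    r++;
  }
  return r;
}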
Figure 10 shows an example using all the method contracts: a method which takes an integer as argument and returns the integer square root of that value. The precondition of the method is that the integer value must be greater than or equal to zero. The postcondition says that result*result <= x && x < (result+1)*(result+1), meaning that the returned square root is always:
1. such that its square is less than or equal to the value itself (result*result <= x), and
2. such that the value is less than the square of the successor of the square root (x < (result+1)*(result+1)).
Let's check this with two different values. First take x = 0: its square root is 0, so the postcondition becomes 0 <= 0 && 0 < 1, which is true. Now take a value greater than 0, e.g. 4: with x = 4 the postcondition becomes 4 <= 4 && 4 < 9, which is also true. The invariant says that r*r <= x; check it with the same two values. When x = 0, the loop condition (r+1)*(r+1) <= x evaluates to 1 <= 0, which is false, so r remains 0 before and after the loop and the invariant holds as 0*0 <= 0. Now take the other example, x = 4:
Iteration | r before loop body | x | Loop condition (r+1)*(r+1) <= x | r after loop body
1 | 0 | 4 | 1 <= 4, true, so r++ | 1
2 | 1 | 4 | 4 <= 4, true, so r++ | 2
3 | 2 | 4 | 9 <= 4, false | 2
At the third iteration it exits the while loop, as the condition becomes false. Hence the invariant holds true before and after the loop.
4.2 Dynamic Tool
A dynamic test coverage report is generated by executing the code on a real or virtual processor. For dynamic test coverage execution to be effective, the target code must be executed with sufficient and necessary inputs to generate the expected output. Dynamic test coverage execution demonstrates the presence of errors (but not their absence). For dynamic test coverage in this project work, EclEmma has been used; the following section discusses the tool briefly.
4.2.1 Eclemma
EclEmma [25, 26] is an open source Java code coverage tool for Eclipse (the plug-in can be downloaded from http://update.eclemma.org/). It generates the test coverage result directly in the Eclipse workbench. When analyzing a coverage result, testers focus on the following aspects:
1. Coverage overview: the coverage view lists test coverage summaries for Java projects, packages, classes, methods, etc. It can also be customized to show results according to the user's requirements. Figure 11 shows the coverage overview.
Figure 11: Coverage view on the basis of different counters
As shown in figure 11, the top right corner offers other counters, such as the instruction, branch, line and method counters. The view also allows projects to be expanded to observe the test coverage of a package, a class and their methods. For example, the config package has 3 classes; one of them is LAB.java, and expanding this class shows its member functions and their test coverage.
2. Source highlighting: the result of a coverage session is also directly visible in the Java source editors. A customizable color code highlights fully covered, partly covered and uncovered lines [25, 26]. Figure 12 shows the source highlighting of a particular class: green shows fully covered lines, yellow shows partially covered lines, and light red shows the lines which were skipped. Partially covered lines are usually branches; a plain statement is either fully covered or not covered.
Figure 12 - Source highlighting to see partially covered, fully covered and skipped lines
3. Customizing the result: since EclEmma is open source, it offers additional features to generate test coverage according to the user's requirements:
● Different counters: several metrics can be selected, such as lines, instructions, methods, branches, types or cyclomatic complexity.
● Multiple coverage sessions: more than one test coverage session can be run, and the user can switch between sessions.
● Merge sessions: all test coverage sessions can be merged into one.
● Export the result: coverage data can be exported in HTML, XML or CSV format, or as JaCoCo execution data files (*.exec) for further use.
● Run configuration: the report to be generated can also be customized.
By default EclEmma uses a run configuration which shows the test coverage of both the 'src' and 'test' folders. However, there is no need to report the coverage of the test classes themselves: test classes exist to exercise the source code and any external jar files used. EclEmma allows the run configuration to be customized so that the result matches the user's needs. Figure 13 shows how to change the run configuration: a new configuration is defined which reports the coverage of the external jar file and the source code only. For this project the external jar file contains the classes needed for the API and core functionality.
Figure 13: Customize your test coverage result according to requirement
4.3 Testing Tool
TestNG [27] is a testing framework inspired by JUnit and NUnit, but it introduces new functionality that makes it more powerful and easier to use. TestNG is designed to cover all categories of tests: unit, functional, end-to-end, integration, etc. The following annotations have been used in the test classes for the experiments; a skeleton test class using them is sketched after the list. For more details on usage please refer to appendix D.
1. @BeforeClass: the annotated method runs before the first test method in the current class is invoked.
2. @BeforeMethod: the annotated method runs before each test method, usually to set up the environment before every test.
3. @AfterMethod: the annotated method runs after each test method, usually to clean the environment after every test (clean means deleting the settings used by the previous test).
4. @AfterClass: the annotated method runs after all the test methods in the class have been executed.
5. @DataProvider: provides data for a test method. The method must return an Object[][], where each Object[] supplies the parameter list of one invocation of the test method.
6. @Test: marks a method as a test.
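The skeleton below illustrates how these annotations fit together (the class name, method bodies and data values are hypothetical, not the project's actual tests):

import org.testng.annotations.*;

public class ServerSmokeTest {

    @BeforeClass
    public void connect() {
        // runs once, before the first test method in this class
    }

    @BeforeMethod
    public void setUp() {
        // runs before each test method, to set up the environment
    }

    @DataProvider(name = "serverNames")
    public Object[][] serverNames() {
        // each Object[] becomes the parameter list of one test invocation
        return new Object[][] { { "ovs-1" }, { "ovs-2" } };
    }

    @Test(dataProvider = "serverNames")
    public void discoverServer(String name) {
        // a test method; it would call the API and assert on the result
    }

    @AfterMethod
    public void tearDown() {
        // runs after each test method, to clean the environment
    }

    @AfterClass
    public void shutDown() {
        // runs once, after all the test methods have executed
    }
}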
4.4 Unified Modeling Language Tool
For the class design, use case diagram, package description diagram, etc., a tool called StarUML has been used. The Unified Modeling Language (UML) is a general-purpose modeling language which offers a standard way to visualize a system. There is also an Eclipse plug-in called ObjectAid (available online at http://www.objectaid.com/installation) which allows class diagrams to be created within Eclipse.
Figure 14: Process to create an ObjectAid class diagram file in Eclipse
Figure 14 shows how to create the file used to draw a class or sequence diagram; the diagram can also be exported as an image for further use. The tool is very simple to use and reflects the code with complete accuracy. After creating the file, right-click anywhere in it, select Add and then Java Classifier: this allows all the Java classes, interfaces, etc. to be added to the class diagram. The layout of the diagram can also be organized automatically. Figures 14 and 15 give a pictorial representation of these steps.
Figure 15: Create Class Diagram Using Object Aid Plug-in for Eclipse
5. Project Overview
This chapter discusses the code structure written for the experiments. The project addresses the industrial problem of measuring the test coverage of the API and core functionality of WSAPI, so representative code was written to perform the experiments. Representative means it is similar to the work done in industry but covers only a small portion of it: for example, this code contains everything related to the server class, and instead of using real servers the experiments are run against objects of these classes. The test case selection was also based on the specification used in industry. All the work is similar to that done in industry, but amounts to perhaps 1% or 2% of the total, and is therefore termed representative. The server class was chosen because I started my work in the company with this class and was familiar with it and its related operations, so the API and core functionality implemented for the thesis relate to the server class only.
5.1 Representative Code Structure
Figure 16 shows the list of all the Java files, report files and build files written to perform the experiments. The project folder is named dissertation and contains 6 main folders and 2 files, briefly described below:
1. src: this folder has 5 important packages containing the source code for the API, the core functionality and other utility classes. The functionality of every class in each package is described in detail later in this chapter.
2. tests: this folder includes all the test classes written for testing the API and core functionality.
3. lib: this folder contains the external jar files used in the project: sigtestdev.jar (for creating the signature file) and apicover.jar (for generating the test coverage report) for static test coverage analysis. It also contains rt.jar (the runtime environment file, which contains all the compiled class files for the base Java Runtime Environment).
4. reports: this folder contains the signature files and the test coverage reports generated by static test coverage analysis using Apache Ant.
5. test-output: this folder is generated automatically by the TestNG framework. It contains HTML pages with the results, such as passed, failed and skipped tests and the time taken to run each test, which is helpful for publishing results on a web link.
6. UML-Diagram: this folder contains the class diagram, created with the ObjectAid Eclipse plug-in.
7. build.xml: an Ant script used to generate the test coverage report by static test coverage analysis.
8. build.properties: the property file for build.xml. It defines short names for the paths used by build.xml, which then refers to the paths specified in this file.
Figure 16: Java file list created for the dissertation
5.2 Use Case Diagram
Figure 17 shows the high-level use case diagram of what the product is supposed to do. The main functionality is the communication of client code with OVMM using WSAPI. The client code logs in to OVMM using one of three communication protocols (SOAP, REST-XML and REST-JSON). After login it may perform any of the following high-level operations, and may also mix these operations in any order. All the operations are shown in figure 17 and explained after the figure.
Figure 17: Use Case Diagram
40
Figure 17 shows all the operations that a client can perform using the core functionality and API. After login the client can perform any of the following operations, in any sequence:
1. Discover Servers,
2. Create Server Pool,
3. Create Storage (Network File System (NFS), iSCSI, Fibre Channel (FC)),
4. Create Network,
5. Create Repository, and
6. Create Virtual Machine.
All the above operations are very high-level descriptions of what the client code can perform. For example, after discovering a server the client can modify its properties, such as setting NTP (Network Time Protocol, the networking protocol for clock synchronization between computers over packet-switched, variable-latency networks) servers, and can start, kill, stop, restart or delete the server.
5.3 Class Overview/Diagram
This section describes the classes inside each package. There are five important packages, each containing some Java classes. A brief description of each package and its classes is shown in figure 18 and given in the section after the figure.
Figure 18: Important Packages for my dissertation
Figure 18 shows the 5 important packages required to run the experiments; each is briefly discussed below.
1. Utils: this package contains the following classes:
- JobUtils.java: contains the methods which perform a job (i.e. a function). Every method in the core functionality returns a job, and this class provides a wrapper function for each such method. It allows the core functionality to finish the job by waiting for some time: for example, to discover a server it loops, printing a message to the console, until the server is discovered, while in the meantime the core functionality discovers the server and finishes the job.
- ServerUtils.java: the utility class for the operations performed on a server, such as discover, start, kill, stop, delete and restart. Each of its methods calls the corresponding method of JobUtils.java, which performs the required function by directly calling the core functionality.
- TestUtils.java: contains the methods used to clean the environment after a test method or a test class (clean means deleting the old configuration and restoring the default settings for the next test or test class), together with functions to clean the configuration file according to the user's requirements. It also has methods which return the updated object after every operation, so there is no need to refresh objects explicitly every time, and no risk of error if someone forgets to refresh an object before asserting on its updated value.
2. Builder: this package contains the following two classes:
- ServerBuilder.java: provides wrapper functions to discover a server directly from the utility code. The utility code obtains the available manager and server from the configuration file and then calls the core functionality to discover the corresponding server. It implements the concept of polymorphism: there are two methods with the same name but different signatures. One accepts the manager and server configuration objects; the other accepts all the details of the server, such as name, hostname, virtual IP and manager id.
- RandomStringGenerator.java: generates a random string made up of letters, digits and special characters, used as the Universal Unique Identifier (UUID) for servers and managers, which makes it possible to retrieve all the information about a server or manager by its UUID. The length of the string equals the integer value passed as an argument; the method public static String generateRandomString(int length) throws an exception if the length passed is less than 1.
3. WSAPI: this package contains the core functionality (OvmWsClient.java) and the model objects (the API). For the sake of simplicity only the server-related classes were written for the experiments; at the start of the thesis work I was already familiar with the server class from the company work, which is why it was chosen. The original API used in the company contained 90 classes, 35 enumerations and 1 exception class.
- ImplementationType.java: decides which communication protocol (SOAP, REST-XML, REST-JSON) to use, based on the day of the week. Its login method checks the day in the calendar and chooses one protocol for login.
- Login.java: an interface with one abstract method. It was added to observe the test coverage of an interface and its abstract method, and also to observe the behavior of callbacks under both test coverage approaches.
- OvmWsClient.java: the core functionality class. Every method in this class returns a job; for the experiments only the jobs related to the server were implemented. The actual class used in industry has 435 methods covering all the core functionality, including jobs such as server discovery, network creation, server pool creation, and adding and removing servers from a pool.
- Server.java: the model class, containing all the getters and setters of the server attributes. The actual industry project has 90 model objects, such as server, server pool, network and storage.
4. Config: this package holds the configuration file (a file which stores the default managers and servers used for the experiments) for all the available servers and managers. Every tester is given 2 servers and 1 manager to run their own tests, and all testers use the same class to fetch them. By default 3 servers and 2 managers were included in the configuration file for the project setup.
- LAB.java: contains the stored information about the servers and managers, and is used to fetch them during the tests.
- ManagerConfig.java: contains all the setters and getters for the attributes of a manager on the client side.
- ServerConfig.java: contains all the setters and getters for the attributes of a server. The client code obtains the server configuration from LAB.java and passes a ServerConfig object to the utility class, which calls the core functionality. The actual data type conversion takes place there: the object returned is of the Server class, and the conversion from one type to the other happens in the method which calls the core functionality.
5. Parser: this package contains an interface used to define the @ReadOnly annotation. Read-only methods are those which have no effect even if they are used: for example, some attributes of a class cannot be changed by their setter methods, so those setters act as read-only. The @ReadOnly annotation identifies such methods so that they can later be removed from the test coverage report. The package also has a Java reflection class to parse all the methods annotated with @ReadOnly.
- AnnotationParsing.java: uses Java reflection to parse the classes containing @ReadOnly methods. All such methods are removed from the signature file to give a more accurate test coverage report.
- ReadOnly.java: an interface which defines the @ReadOnly annotation using the Java @Target and @Retention policies. Figure 19 below shows how to define your own annotation.
Figure 19: Define your own annotation using java
- Bicycle.java: a superclass written for the inheritance experiment.
- MountainBike.java: a subclass written for the inheritance experiment.
Figure 19 shows how to define your own annotation using the Java retention policy. @Target specifies that the annotation can be applied to a method, and @Retention specifies that the annotation is kept at runtime. The name of the interface serves as the name of the annotation, and is the same as the name of the class file.
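Since figure 19 is an image, the following sketch reconstructs the definition from the description above (the reflection helper class and its name are assumptions):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

@Target(ElementType.METHOD)           // the annotation can be applied to methods
@Retention(RetentionPolicy.RUNTIME)   // the annotation is kept at runtime
@interface ReadOnly {
}

class AnnotationParsingSketch {
    // A reflection loop of the kind AnnotationParsing.java uses (sketch only):
    static void listReadOnlyMethods(Class<?> c) {
        for (Method m : c.getDeclaredMethods()) {
            if (m.isAnnotationPresent(ReadOnly.class)) {
                System.out.println(m.getName() + " is read-only");
            }
        }
    }
}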
Figure 20 is a high-level class diagram. It includes all the classes, including the test classes, arranged according to the calls made from each test class. It begins with the test class (WsApiCoverageTest), which calls ServerBuilder.java, ServerUtils.java, LAB.java, TestUtils.java and Server.java directly; those classes in turn call other classes, and so on down the call hierarchy. Three test classes are shown together with all the classes they call. The multiplicity is shown as 0..1, and m_ovs indicates the object through which a class is called. A more detailed low-level class diagram is shown in appendix E.
Figure 20: High-level class diagram, with calls starting from the test class for server operations
6. Test Experiments and Results
This chapter develops a model of API testing, using the model of GUI testing as a reference, and then derives metrics from the model of API testing. Results are shown of the experiments executed using the static and dynamic test coverage approaches, based on the derived metrics. The chapter is divided into three subsections as follows:
1. First section develops a model of API testing and derives metrics from this model.
2. Second section describes the experiments executed on the basis of those metrics using both test coverage approaches.
3. Third section describes experiments executed on the basis of some extra metrics measured by static test coverage analysis.
6.1 Model of API Testing
This section develops a model of API testing [13], using the model of GUI testing [14] as a reference. It first discusses the model of GUI testing and then the model of API testing.
6.1.1 Model of GUI Testing
In GUI testing (as opposed to system testing over the GUI interface), tests are evaluated to make sure the following criteria are satisfied [28]:
1. Make sure that the GUI is correctly structured, e.g.
a. The right elements on the right pages
b. The right navigation between pages
2. Make sure that graphics elements are correctly presented.
3. Make sure that user inputs are correctly interpreted into internal (normally binary)
format: e.g. "10" to the value 10 (00001010 binary).
4. Make sure that user inputs result in the correct actions for e.g.
a. Calling the correct methods to do validation of input data.
b. Calling of the correct methods to execute the "action" of buttons etc.
c. Calling the correct "action" methods for pull-down and popup menus.
d. Calling the correct "action" methods for keyboard shortcuts.
e. Calling the correct "action" methods for mouse actions.
5. Make sure that outputs from the program are correctly displayed via the GUI to the user (that is, correctly converted from internal format to GUI format such as text, or graphics elements such as dials and indicators) and then shown in the correct GUI element (not mixed up).
6.1.2 Model of API Testing
Similarly, based on the model of GUI testing, a model of API testing can be developed to evaluate the following coverage criteria.
1. Ensure that the API is visible with the correct protections, e.g.
a. private int integer;
b. protected String str;
c. public void print(); etc.
2. Ensure the correct conversion from 'external' to 'internal' format, e.g.
a. int stringLength(String str),
b. String generateRandomString(int), etc.
3. Ensure the correct transfer of input parameters to the actual internal methods called.
4. Ensure that the correct internal classes/methods are called from/via the API
(particularly important in the case of extending a class or implementing an interface).
5. Ensure that the outputs are correctly passed back over the API to the calling program
(correct callbacks are executed).
6. Ensure that we are getting the correct response in the form of data, data type, order of
data, and completeness of information.
Based on the model of API testing developed above, the following metrics were derived to evaluate experiments using both static test coverage analysis and dynamic test coverage execution.
1. Method/Class/Package test coverage. (General Test Coverage)
2. Interface and their abstract members test coverage. (Criteria 1,4)
3. Callbacks test coverage. (Criterion 5)
4. Data type conversion test coverage. (Criterion 2)
5. Inherited Class/Method test coverage. (Criteria 3,4)
6. Direct and indirect classes/method test coverage. (Criterion 6)
The following section of this chapter discusses the test coverage of each of these metrics using both the static and dynamic test coverage approaches.
6.2 Experiments on Metrics Derived From the Model of API Testing
This section discusses the experiments performed, and the results obtained, using both static and dynamic test coverage on the basis of the metrics derived from the model of API testing. Every experiment is presented in two parts: the first describes the experimental setup and the second the result obtained. Table 6.1 gives brief information about each experiment, its purpose, and the section describing its setup and result. All the results are explained and evaluated in the next chapter.
Table 6.1: Brief description of all the experiments
Experiment 1 (section 6.2.1): determine the test coverage of all packages, classes and methods using the same set of test cases.
Experiment 2 (section 6.2.2): evaluate the test coverage of an interface and its abstract members.
Experiment 3 (section 6.2.3): evaluate the test coverage and execution of callbacks.
Experiment 4 (section 6.2.4): test coverage of methods which convert objects of one data type to another.
Experiment 5 (section 6.2.5): test coverage of inherited classes and member functions.
Experiment 6 (section 6.2.6): test coverage of classes accessed directly and indirectly by the test class.
Experiment 7 (section 6.3.1): test coverage of the default constructor.
Experiment 8 (section 6.3.2): test coverage of the attributes (data members) of a class.
Experiment 9 (section 6.3.3): how to remove members which are not useful (read-only members, constructors, enumerations, etc.) from the test coverage report, to obtain more accurate results.
Note: experiments 7, 8 and 9 are not based on the metrics derived from the model of API testing; they arose from observations made while executing the other experiments. These three experiments are discussed in 6.3 Experiments on Additional Metrics Covered By Static Test Coverage Analysis.
6.2.1 Method/Class/Package Test Coverage
This experiment was executed to observe the package, class and method coverage reported by both the static and dynamic test coverage approaches.
6.2.1.1 Experiment 1
The same test cases, in a single test class, were used with both test coverage approaches. The test class written for this experiment is shown in figure 21.
Figure 21: Test class for testing core functionality and API
The class shown in figure 21 contains tests of the core functionality of the server class. It was analyzed by static test coverage and executed by dynamic test coverage, to observe whether the two give the same or different coverage for classes, packages and the methods in every class. This test class contains test cases that exercise methods only.
6.2.1.2 Result 1
Figure 22 shows the results using dynamic (left side) and static (right side) test coverage. It can be seen that the wsapi package has an overall test coverage of 93.33% by static analysis and 93.5% by dynamic execution, so the two results are almost the same. There are only two differences:
1. Dynamic test coverage execution does not include interface test coverage (discussed in detail in the next experiment), whereas static test coverage includes the interface and its abstract method in the result.
2. Static test coverage treats enums as separate classes, whereas dynamic treats them as members of a class. This leads to different coverage figures for the class and the enumeration, but makes little difference to the overall coverage percentage.
Figure 22: Class/method test coverage by the dynamic (left) and static (right) test coverage approaches
6.2.2 Interface and Their Abstract Members Test Coverage
This experiment shows how test coverage is generated for Java interfaces and their abstract members. A class implements an interface, thereby inheriting the abstract methods of the interface. An interface is not a class; they differ in the following way:
 A class describes the attributes and behaviors of an object.
 An interface contains behaviors that a class implements.
The abstract methods are declared in the interface and defined in the classes implementing it: an interface is a collection of abstract methods.
6.2.2.1 Experiment 2
An interface with an abstract method "methodToCallBack()" was written in the "wsapi" package. This interface is implemented by the ManagerConfig.java class in the config package. In the test class, m_wsapi = m_ovmm.getWsApiConnection() is called to create the connection between the client and OVMM, and "getWsApiConnection()" (a method of the class implementing the interface) calls the abstract method.
6.2.2.2 Result 2
Figure 23: Interface and abstract method coverage by static (right) and dynamic (left)
approach
As shown in figure 23, the test coverage result from the static approach is on the right side. The lines in blue are the interface and its abstract method; they are marked as covered under the "wsapi" package. On the left side is the test coverage report generated by the dynamic tool, EclEmma [25, 26]. In this report the interface is not mentioned in the "wsapi" package, and in the class which implements the interface, ManagerConfig.java, the abstract method "methodToCallBack()" appears as a member function of the class. This experiment shows that:
1. Static test coverage analysis shows the coverage of an interface, with its abstract members, in the package where the interface was defined.
2. The dynamic tool does not show interfaces, but it shows the coverage of the interface's abstract methods as member functions of the class which implements the interface.
6.2.3 Callbacks Test Coverage
Callbacks refer to a mechanism in which a library or utility class provides a service to another class that is unknown to it when it is defined. In computer programming terms, a callback is a piece of executable code that is passed as an argument to other code, which is expected to call back, i.e. execute, that argument. The UML diagram of a working callback is shown in figure 24: there is an interface, Login.java, with an abstract method, methodToCallBack(), declared inside it. A Java class which implements the interface has a method which calls another method of the same class (or of another class), and this called method invokes the abstract method of the interface. This process is referred to as a callback.
Figure 24: Uml Diagram to explain the working of Callback
6.2.3.1 Experiment 3
The implementation of the callback is shown in figure 25. There is an interface "Login" with an abstract method "methodToCallBack()". ManagerConfig.java implements this interface, and has a method which calls another method, "Callback()", of the same class. Inside Callback() the abstract method of the interface is called.
Figure 25: Callback Implementation and its call using other class in java
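As figure 25 is an image, the following sketch reconstructs the implementation from the description above (method bodies are assumptions):

interface Login {
    void methodToCallBack();    // the abstract method to be called back
}

class ManagerConfig implements Login {
    public void methodToCallBack() {
        System.out.println("callback executed");
    }

    public void getWsApiConnection() {
        Callback(this);         // pass this object on; Callback() will call back into it
    }

    private void Callback(Login login) {
        login.methodToCallBack();   // the callback: invoking the interface method
    }
}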
6.2.3.2 Result 3
Figure 26: Callback test coverage by static (right) and dynamic (left) approach
Figure 26 shows the test coverage generated by static test coverage analysis on the right side and by dynamic test coverage execution on the left side.
Static test coverage shows the interface and its abstract method in the package where the interface was declared. Dynamic test coverage execution does not show the interface in the "wsapi" package; instead, the abstract method "methodToCallBack()" appears as a member function of the class which implements the interface.
The rest of the coverage is the same for both approaches; the only difference is in the coverage of the interface and its abstract methods, the same result as in experiment 6.2.2 Interface and Their Abstract Members Test Coverage.
From these two experiments it is observed that static test coverage works on method declarations, while dynamic test coverage works on method definitions.
6.2.4 Data Type Conversion Test Coverage
Data type conversion here means a method that accepts arguments of one data type and returns objects of another data type. Such methods can introduce bugs, so it is important to test them and observe their test coverage. Some of the utility code written for the project converts objects of the configuration types into objects of the API types via the core functionality; for example, the two functions shown in figure 27.
Figure 27: Example code for conversion of data type of objects
6.2.4.1 Experiment 4
The methods shown in figure 27 are two different methods for discovering servers. Both have the same name and return objects of the Server class; the difference is that they accept different arguments. There are other methods which offer the same functionality by calling methods of the core functionality, and test cases were written simply to call all such methods. Four classes perform data type conversion:
 ServerBuilder.java
 JobUtils.java
 ServerUtils.java
 RandomStringGenerator.java
The test class contained test cases calling every method of the classes listed above, to observe the coverage under both static test coverage analysis and dynamic test coverage execution. A minimal sketch of the overloading pattern involved is shown below.
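The sketch is self-contained but heavily simplified (the types, fields and method bodies are assumptions; the real classes carry many more attributes):

class Server {
    final String name;
    Server(String name) { this.name = name; }
}

class ManagerConfig { String id = "mgr-1"; }
class ServerConfig  { String name = "ovs-1"; }

class ServerBuilderSketch {
    // Overload 1: accepts the configuration objects and converts them.
    static Server discoverServer(ManagerConfig m, ServerConfig s) throws Exception {
        return discoverServer(s.name, m.id);    // delegate after extracting the details
    }

    // Overload 2: accepts the individual server details.
    static Server discoverServer(String name, String managerId) throws Exception {
        // here the core functionality would run the discovery job
        return new Server(name);
    }
}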
CS640_Thesis_AbulAalaAlamsBari13250849
CS640_Thesis_AbulAalaAlamsBari13250849
CS640_Thesis_AbulAalaAlamsBari13250849
CS640_Thesis_AbulAalaAlamsBari13250849
CS640_Thesis_AbulAalaAlamsBari13250849
CS640_Thesis_AbulAalaAlamsBari13250849
CS640_Thesis_AbulAalaAlamsBari13250849
CS640_Thesis_AbulAalaAlamsBari13250849
CS640_Thesis_AbulAalaAlamsBari13250849
CS640_Thesis_AbulAalaAlamsBari13250849
CS640_Thesis_AbulAalaAlamsBari13250849
CS640_Thesis_AbulAalaAlamsBari13250849
CS640_Thesis_AbulAalaAlamsBari13250849
CS640_Thesis_AbulAalaAlamsBari13250849
CS640_Thesis_AbulAalaAlamsBari13250849
CS640_Thesis_AbulAalaAlamsBari13250849
CS640_Thesis_AbulAalaAlamsBari13250849
CS640_Thesis_AbulAalaAlamsBari13250849
CS640_Thesis_AbulAalaAlamsBari13250849
CS640_Thesis_AbulAalaAlamsBari13250849
CS640_Thesis_AbulAalaAlamsBari13250849
CS640_Thesis_AbulAalaAlamsBari13250849
CS640_Thesis_AbulAalaAlamsBari13250849
CS640_Thesis_AbulAalaAlamsBari13250849
CS640_Thesis_AbulAalaAlamsBari13250849
CS640_Thesis_AbulAalaAlamsBari13250849
CS640_Thesis_AbulAalaAlamsBari13250849
CS640_Thesis_AbulAalaAlamsBari13250849
CS640_Thesis_AbulAalaAlamsBari13250849
CS640_Thesis_AbulAalaAlamsBari13250849
CS640_Thesis_AbulAalaAlamsBari13250849
CS640_Thesis_AbulAalaAlamsBari13250849
CS640_Thesis_AbulAalaAlamsBari13250849
CS640_Thesis_AbulAalaAlamsBari13250849
CS640_Thesis_AbulAalaAlamsBari13250849
CS640_Thesis_AbulAalaAlamsBari13250849
CS640_Thesis_AbulAalaAlamsBari13250849
CS640_Thesis_AbulAalaAlamsBari13250849
CS640_Thesis_AbulAalaAlamsBari13250849
CS640_Thesis_AbulAalaAlamsBari13250849
CS640_Thesis_AbulAalaAlamsBari13250849

More Related Content

Similar to CS640_Thesis_AbulAalaAlamsBari13250849

A STUDY OF FORMULATION OF SOFTWARE TEST METRICS FOR INTERNET BASED APPLICATIONS
A STUDY OF FORMULATION OF SOFTWARE TEST METRICS FOR INTERNET BASED APPLICATIONSA STUDY OF FORMULATION OF SOFTWARE TEST METRICS FOR INTERNET BASED APPLICATIONS
A STUDY OF FORMULATION OF SOFTWARE TEST METRICS FOR INTERNET BASED APPLICATIONSecij
 
Advanced Verification Methodology for Complex System on Chip Verification
Advanced Verification Methodology for Complex System on Chip VerificationAdvanced Verification Methodology for Complex System on Chip Verification
Advanced Verification Methodology for Complex System on Chip VerificationVLSICS Design
 
ANALYSIS OF SOFTWARE SECURITY TESTING TECHNIQUES IN CLOUD COMPUTING
ANALYSIS OF SOFTWARE SECURITY TESTING TECHNIQUES IN CLOUD COMPUTINGANALYSIS OF SOFTWARE SECURITY TESTING TECHNIQUES IN CLOUD COMPUTING
ANALYSIS OF SOFTWARE SECURITY TESTING TECHNIQUES IN CLOUD COMPUTINGEditor IJMTER
 
Automated testing-whitepaper
Automated testing-whitepaperAutomated testing-whitepaper
Automated testing-whitepaperimdurgesh
 
IRJET- Development Operations for Continuous Delivery
IRJET- Development Operations for Continuous DeliveryIRJET- Development Operations for Continuous Delivery
IRJET- Development Operations for Continuous DeliveryIRJET Journal
 
Automation Testing of Web based Application with Selenium and HP UFT (QTP)
Automation Testing of Web based Application with Selenium and HP UFT (QTP)Automation Testing of Web based Application with Selenium and HP UFT (QTP)
Automation Testing of Web based Application with Selenium and HP UFT (QTP)IRJET Journal
 
Test Automation Strategy for Frontend and Backend
Test Automation Strategy for Frontend and BackendTest Automation Strategy for Frontend and Backend
Test Automation Strategy for Frontend and BackendArshad QA
 
Online Examination System Project report
Online Examination System Project report Online Examination System Project report
Online Examination System Project report SARASWATENDRA SINGH
 
FROM THE ART OF SOFTWARE TESTING TO TEST-AS-A-SERVICE IN CLOUD COMPUTING
FROM THE ART OF SOFTWARE TESTING TO TEST-AS-A-SERVICE IN CLOUD COMPUTINGFROM THE ART OF SOFTWARE TESTING TO TEST-AS-A-SERVICE IN CLOUD COMPUTING
FROM THE ART OF SOFTWARE TESTING TO TEST-AS-A-SERVICE IN CLOUD COMPUTINGijseajournal
 
From the Art of Software Testing to Test-as-a-Service in Cloud Computing
From the Art of Software Testing to Test-as-a-Service in Cloud ComputingFrom the Art of Software Testing to Test-as-a-Service in Cloud Computing
From the Art of Software Testing to Test-as-a-Service in Cloud Computingijseajournal
 
Online examination system
Online examination system Online examination system
Online examination system IRJET Journal
 
A SURVEY ON BLOOD DISEASE DETECTION USING MACHINE LEARNING
A SURVEY ON BLOOD DISEASE DETECTION USING MACHINE LEARNINGA SURVEY ON BLOOD DISEASE DETECTION USING MACHINE LEARNING
A SURVEY ON BLOOD DISEASE DETECTION USING MACHINE LEARNINGIRJET Journal
 
Investigating Geographic Information System Technologies A Global Positioning...
Investigating Geographic Information System Technologies A Global Positioning...Investigating Geographic Information System Technologies A Global Positioning...
Investigating Geographic Information System Technologies A Global Positioning...Simon Sweeney
 

Similar to CS640_Thesis_AbulAalaAlamsBari13250849 (20)

A STUDY OF FORMULATION OF SOFTWARE TEST METRICS FOR INTERNET BASED APPLICATIONS
A STUDY OF FORMULATION OF SOFTWARE TEST METRICS FOR INTERNET BASED APPLICATIONSA STUDY OF FORMULATION OF SOFTWARE TEST METRICS FOR INTERNET BASED APPLICATIONS
A STUDY OF FORMULATION OF SOFTWARE TEST METRICS FOR INTERNET BASED APPLICATIONS
 
Advanced Verification Methodology for Complex System on Chip Verification
Advanced Verification Methodology for Complex System on Chip VerificationAdvanced Verification Methodology for Complex System on Chip Verification
Advanced Verification Methodology for Complex System on Chip Verification
 
ANALYSIS OF SOFTWARE SECURITY TESTING TECHNIQUES IN CLOUD COMPUTING
ANALYSIS OF SOFTWARE SECURITY TESTING TECHNIQUES IN CLOUD COMPUTINGANALYSIS OF SOFTWARE SECURITY TESTING TECHNIQUES IN CLOUD COMPUTING
ANALYSIS OF SOFTWARE SECURITY TESTING TECHNIQUES IN CLOUD COMPUTING
 
Automated testing-whitepaper
Automated testing-whitepaperAutomated testing-whitepaper
Automated testing-whitepaper
 
IRJET- Development Operations for Continuous Delivery
IRJET- Development Operations for Continuous DeliveryIRJET- Development Operations for Continuous Delivery
IRJET- Development Operations for Continuous Delivery
 
new anu resume
new anu resumenew anu resume
new anu resume
 
Automation Testing of Web based Application with Selenium and HP UFT (QTP)
Automation Testing of Web based Application with Selenium and HP UFT (QTP)Automation Testing of Web based Application with Selenium and HP UFT (QTP)
Automation Testing of Web based Application with Selenium and HP UFT (QTP)
 
QAIBP
QAIBPQAIBP
QAIBP
 
Test Automation Strategy for Frontend and Backend
Test Automation Strategy for Frontend and BackendTest Automation Strategy for Frontend and Backend
Test Automation Strategy for Frontend and Backend
 
Paper review
Paper reviewPaper review
Paper review
 
Testing
TestingTesting
Testing
 
Online Examination System Project report
Online Examination System Project report Online Examination System Project report
Online Examination System Project report
 
FROM THE ART OF SOFTWARE TESTING TO TEST-AS-A-SERVICE IN CLOUD COMPUTING
FROM THE ART OF SOFTWARE TESTING TO TEST-AS-A-SERVICE IN CLOUD COMPUTINGFROM THE ART OF SOFTWARE TESTING TO TEST-AS-A-SERVICE IN CLOUD COMPUTING
FROM THE ART OF SOFTWARE TESTING TO TEST-AS-A-SERVICE IN CLOUD COMPUTING
 
From the Art of Software Testing to Test-as-a-Service in Cloud Computing
From the Art of Software Testing to Test-as-a-Service in Cloud ComputingFrom the Art of Software Testing to Test-as-a-Service in Cloud Computing
From the Art of Software Testing to Test-as-a-Service in Cloud Computing
 
Online examination system
Online examination system Online examination system
Online examination system
 
A SURVEY ON BLOOD DISEASE DETECTION USING MACHINE LEARNING
A SURVEY ON BLOOD DISEASE DETECTION USING MACHINE LEARNINGA SURVEY ON BLOOD DISEASE DETECTION USING MACHINE LEARNING
A SURVEY ON BLOOD DISEASE DETECTION USING MACHINE LEARNING
 
thesis
thesisthesis
thesis
 
thesis
thesisthesis
thesis
 
Txet Document
Txet DocumentTxet Document
Txet Document
 
Investigating Geographic Information System Technologies A Global Positioning...
Investigating Geographic Information System Technologies A Global Positioning...Investigating Geographic Information System Technologies A Global Positioning...
Investigating Geographic Information System Technologies A Global Positioning...
 

CS640_Thesis_AbulAalaAlamsBari13250849

  • 1. 1 Evaluation and Comparison of Static Test Coverage Analysis and Dynamic Test Coverage Execution in API Testing Abul Aala Almas Bari (13250849) Industrial Thesis 2015 M.Sc. Computer Science (Software Engineering) Department of Computer Science National University of Ireland, Maynooth County Kildare, Ireland A dissertation submitted in partial fulfillment Of the requirements for the M.Sc. in Computer Science (Software Engineering) Head of Department: Dr Adam Winstanley Supervisor: Dr. Stephen Brown January 2015 Word Count: 26309
  • 2. 2 Declaration I hereby certify that this material, which I now submit for assessment on the program of study leading to the award of Master of Science in computer science (Software engineering), is completely my own work and has not been taken from the work of others. The extent such work has been cited and acknowledged within the text of my work. Signed: Abul Aala Almas Bari Date: 25th Jan 2015
  • 3. 3 Acknowledgement I am using this opportunity to express my gratitude to everyone who supported me throughout the course of this M.Sc. dissertation. I am thankful to Mr. Dr. Stephen Brown, Department of computer science, National University of Ireland, Maynooth for his aspiring guidance, invaluably constructive criticism and friendly advice during the project work. I am sincerely grateful to him for sharing his truthful and illuminating views on a number of issues related to the project. I also thanks to him for his invaluable technical guidance, great innovative ideas and overwhelming moral support during the course of the project. Thanks Abul Aala Almas Bari (13250849) Under the Guidance of Mr. Dr. Stephen Brown Email: sbrown@cs.nuim.ie National University of Ireland, Maynooth
  • 4. 4 Abstract With the popularity of service computing, Web Services are being used by companies and organizations. Web Services APIs are developed using Service Oriented Architecture (SOA). An Application Programming Interface (API) is a set of services, used by programmers to interact with other software. The difference between APIs and Web Services is that latter facilitates interaction between two different computers but former acts as an interface between both computers. APIs of Web Services can be tested by making an API call and observing expected output along with response time. Along with this API testing also need to have some useful metrics, which can determine the proper functionality of the API. This project is inspired from an industrial problem: how best to measure the test coverage of API and core functionality provided by a Web Services. There are two ways of measuring test coverage, one is static test coverage analysis and another is dynamic test coverage execution. Both techniques can generate a test coverage report, which helps to observe gaps (areas which are not tested) in testing API. This paper develops a model of API testing, which is used to derive metrics. Both test coverage techniques are compared on the basis of metrics derived from model of API testing. This helps to determine which test coverage approach generates a better result. This paper concludes with advantages or/and disadvantage of static test coverage analysis over dynamic test coverage execution. It also presents results generated using both techniques. The conclusion of this paper is that a combination of static test coverage analysis and dynamic test coverage execution is most advantageous.
Table of Contents

Title Page
Declaration
Acknowledgement
Abstract
1. Introduction
   1.1 Problem Statement
   1.2 Motivation
   1.3 Aims and Objectives
   1.4 Approach
   1.5 Report Structure
2. Background of Industrial Work
3. Related Work
   3.1 Software Quality Measurement
   3.2 Introduction to WSAPI
   3.3 API Testing
   3.4 Test Coverage
       3.4.1 Static Test Coverage Analysis
       3.4.2 Dynamic Test Coverage Execution
   3.5 Combined Study
4. Tools and Techniques Used
   4.1 Static Tool
       4.1.1 Sigtest User Guide
             4.1.1.1 Generate the signature file
             4.1.1.2 Generate the coverage report
       4.1.2 Apache Ant
             4.1.2.1 How to Write a Simple Build File
       4.1.3 Spec Sharp
   4.2 Dynamic Tool
       4.2.1 Eclemma
   4.3 Testing Tool
   4.4 Unified Modeling Language Tool
5. Project Overview
   5.1 Representative Code Structure
   5.2 Use Case Diagram
   5.3 Class Overview/Diagram
6. Test Experiments and Results
   6.1 Model of API Testing
       6.1.1 Model of GUI Testing
       6.1.2 Model of API Testing
   6.2 Experiments on Metrics Derived from the Model of API Testing
       6.2.1 Method/Class/Package Test Coverage
             6.2.1.1 Experiment 1
             6.2.1.2 Result 1
       6.2.2 Interfaces and Their Abstract Members Test Coverage
             6.2.2.1 Experiment 2
             6.2.2.2 Result 2
       6.2.3 Callbacks Test Coverage
             6.2.3.1 Experiment 3
             6.2.3.2 Result 3
       6.2.4 Data Type Conversion Test Coverage
             6.2.4.1 Experiment 4
             6.2.4.2 Result 4
       6.2.5 Inherited Class and Method Test Coverage
             6.2.5.1 Experiment 5
             6.2.5.2 Result 5
       6.2.6 Direct and Indirect Classes/Methods Test Coverage
             6.2.6.1 Experiment 6
             6.2.6.2 Result 6
   6.3 Experiments on Additional Metrics Covered by Static Test Coverage Analysis
       6.3.1 Default Constructor Test Coverage
             6.3.1.1 Experiment 7
             6.3.1.2 Result 7
       6.3.2 Attributes Test Coverage
             6.3.2.1 Experiment 8
             6.3.2.2 Result 8
       6.3.3 Read-Only Methods Removal from Test Coverage Report
             6.3.3.1 Experiment 9
             6.3.3.2 Result 9
   6.4 Results Summary
7. Evaluation
   7.1 Metrics Derived from the Model of API Testing
       7.1.1 Method, Class, and Package Coverage
       7.1.2 Interfaces and Their Abstract Members Test Coverage
       7.1.3 Callbacks Test Coverage
       7.1.4 Data Type Conversion Test Coverage
       7.1.5 Inherited Class/Method Test Coverage
       7.1.6 Direct and Indirect Classes/Methods Test Coverage
   7.2 Extra Metrics Covered by Static Test Coverage Analysis
       7.2.1 Default Constructor Test Coverage
       7.2.2 Attributes Test Coverage
       7.2.3 Read-Only Methods
8. Conclusion and Future Work
   8.1 Extra Metrics Covered by Static Test Coverage Analysis
   8.2 Limitations and Advantages of Static Test Coverage Analysis
       8.2.1 Major Source of Error
       8.2.2 Advantages of Static Coverage Analysis
   8.3 My Contribution
   8.4 Future Work
       8.4.1 Extend the Model of API Testing
       8.4.2 Develop a Tool Which Combines Both Results
References
Appendix A: Terminology
Appendix B: Use of Sigtest
Appendix C: Use of Apache Ant
Appendix D: Use of TestNG Framework
Appendix E: High-Level Class Diagram
1. Introduction

Software quality has three main aspects: functional quality (the tasks the software is intended to do), structural quality (how well the code is structured), and process quality (the quality of the process used to develop the software). Measurement has always been fundamental to the progress of software testing, and software testing is one approach to measuring software quality. It involves the dynamic verification [39] of the behavior of a program on a finite set of test cases, based on a specification (expected output for a given input) or on expert opinion. Software testing also plays an essential role in the software maintenance life cycle [6], and it is widely practiced by industry to determine and improve the quality of software.

Software testing can also produce a test coverage report. Test coverage is the extent to which a structure has been exercised, expressed as a percentage of the items being tested or covered. Test coverage helps in evaluating the effectiveness of testing by providing data on different coverage items [6], and it reveals the areas (gaps in the testing) that need further testing. There are two approaches for measuring test coverage: static test coverage analysis and dynamic test coverage execution. This paper discusses the major differences in the process of generating test coverage results using the two methodologies. For example, how are interfaces and their abstract member functions treated under each methodology? How does each behave for inherited members? How does the default constructor appear in the coverage report for static members of a class? The two approaches work as follows:

1. A dynamic test coverage tool, such as EclEmma [25, 26], runs the program (over some inputs), observes which code is executed, and analyzes the results. It is used to measure test coverage when testing correctness and performance (optimization), finding runtime bugs, and assessing the quality of the software [35]. Dynamic execution is based on concrete execution: it can be very slow if exhaustive, it is precise because it makes no approximations and needs no assumptions, and it is unsound (its conclusions do not rest on reasoning that generalizes) because its results cannot be generalized beyond the executions observed. Dynamic test coverage execution can also automatically search for assertion violations, divergences, and livelocks [34]. In this project, the process of using a dynamic tool to generate a test coverage report is referred to as dynamic test coverage execution.

2. Static code analysis is the analysis of code without actually executing it [33]. A static tool scans the class files to determine which members could be executed and marks those members as covered. It is conservative and sound, but it makes assumptions. It works on the declaration of a member function (the fully qualified method signature, e.g. int stringLength(String a);) rather than on its definition (the method body, e.g. int stringLength(String a) { return a.length(); }). Static analysis produces a "reference tree" rather than a "call tree": it reports methods that could be called, not ones that necessarily are. In this project, the process of using a static tool to generate a test coverage report is referred to as static test coverage analysis.
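The distinction between a reference tree and a call tree is the crux of the comparison, so a minimal illustration may help. The sketch below is my own (it is not part of the thesis code): a method is referenced on a branch that never executes, so a reference-based static tool counts it as covered while a dynamic tool does not.

    // CoverageDemo.java - illustrative sketch only, not from the thesis code base.
    public class CoverageDemo {

        // Declaration and definition of a simple helper method.
        static int stringLength(String a) {
            return a.length();
        }

        public static void main(String[] args) {
            if (args.length > 0) {
                // A static tool sees this *reference* to stringLength and marks the
                // method covered; a dynamic tool only counts it when this branch runs.
                System.out.println(stringLength(args[0]));
            }
        }
    }

Run with no command-line arguments, a dynamic tool such as EclEmma reports stringLength as not covered, whereas a reference-based static analysis still counts it, because the call site exists in the class file.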
1.1 Problem Statement

Can static test coverage analysis give extra test coverage results compared to dynamic test coverage execution? For example, can static test coverage analysis measure the invocation routes of indirect method calls?

1.2 Motivation

In recent years, Service-Oriented Architecture (SOA) [15] has become a widely adopted paradigm for creating loosely coupled distributed systems. One characteristic of SOA is that the Application Programming Interface (API) of the available Web Services is published in a standard machine-readable description language so that customers can use the services; the Web Services Description Language (WSDL) [1] achieves this for Web Services. These APIs need to be tested to ensure proper functionality. In testing, two types of measurement are usually considered: requirements test coverage measurement (black-box testing) and structural test coverage measurement (white-box testing). Testing produces test coverage reports, which can be used to determine the areas that need further testing. Some basic test coverage metrics, such as branch coverage or statement coverage, require access to the source code, which is not always possible for Web Services. This is why Web Services testing approaches (test cases) usually rely on the API definition, and hence the focus is on black-box testing [16].

The motivation is a mix of business and technical concerns; this section discusses both briefly. The business motivation arises from a problem faced by my team during my internship. My team members in Dublin were writing test cases in Java for testing the Web Services Application Programming Interface (WS-API); the industrial background is discussed in detail in Chapter 2, Background of Industrial Work. When test cases are written for thousands of files, it is hard to remember which files have been tested and which remain. A test coverage report solves this: it shows what has been tested and what is left, and on its basis one can prioritize work in the areas that are most important to test. The primary question was how much of the API and core functionality had been tested and which areas needed further testing. This is the business motivation for the project.

The technical motivation is based on the metrics derived from the model of API testing. These metrics are used as the basis for comparing static test coverage analysis and dynamic test coverage execution. The model of API testing is developed and discussed in Chapter 6, Test Experiments and Results, and the metrics are derived from it in the same chapter, in the section following the model.

The aim, then, is to evaluate the test coverage reports produced by static test coverage analysis and dynamic test coverage execution after testing the API. Effectiveness is defined by measuring the test coverage of all the metrics derived from the model of API testing. A further aim is to identify the similarities, differences, advantages, and disadvantages of static test coverage relative to dynamic test coverage, and vice versa, and to determine the extra test coverage metrics generated by static test coverage analysis.
1.3 Aims and Objectives

For the example code shown in Figure 1, it is easy to see that one needs to measure branch coverage, statement coverage, and so on. The situation is not the same when measuring API test coverage. An API is an important form of software reuse and is very widely used. A model of API testing is required to derive the metrics that can serve as the basis for the comparison between static test coverage analysis and dynamic test coverage execution.

Figure 1: Example Code

The thesis has the following objectives, which together answer the research question:

1. Develop a model of API testing, and derive the test coverage metrics from that model.
2. Determine the test coverage report of the API using both static and dynamic test coverage.
3. Identify and determine the differences and similarities between the two test coverage approaches.
4. Identify and determine the advantages and/or disadvantages of using the static approach over the dynamic approach.
5. Identify and determine the test coverage of method invocation routes, which is not measured directly by the dynamic tool.

1.4 Approach

This thesis is based on an industrial problem, so classes were designed to match the work I was doing in my organization during my internship. The proposal is to evaluate the static and dynamic test coverage of an API. To answer the research question, the following steps were proposed:

● Select representative Java files

Figure 16 in Chapter 5, Project Overview, shows the Java files used for measuring API test coverage. Important functionality-related Java files (such as ImplementationType.java, Login.java, Server.java, etc.) were selected for the experiments in order to obtain useful test coverage results and evaluation. The core functionality is implemented in OvmWsClient.java. These Java files were implemented to mirror the industrial work. Representative code was necessary because the company does not allow its own code to be used in thesis work.
So representative versions were written of the files I was working on at the start of the thesis.

● Test the API using black-box testing

Black-box testing is sometimes also referred to as functional testing or specification-based testing [28]. Test cases were selected on the basis of the given specification and expert opinion, and some additional positive and negative tests were included from the manual tests. Three black-box testing techniques (Equivalence Partitioning (EP), Boundary Value Analysis (BVA), and Combinations of Inputs (CI)) were used to test each method in each selected Java file of the API and core functionality. The main objective was to test the API and core functionality. Negative tests are tests that are expected to fail. (A sketch of such a test appears at the end of this section.)

● Identify and determine static and dynamic test coverage

After testing, the next task is to generate the test coverage reports using static test coverage analysis and dynamic test coverage execution, based on all the metrics derived from the model of API testing. This is the main part of the work: it determines how the coverage results generated by the two approaches behave for each metric. For example, how is abstract method test coverage displayed in the coverage report?

● Compare the static and dynamic coverage results and draw conclusions

Finally, on the basis of the coverage results, static test coverage analysis is compared with dynamic test coverage execution using the following criteria:

1. Test coverage for all metrics derived from the model of API testing.
2. Additional metrics covered only by static test coverage analysis.
3. The number of tests written and the time taken to execute them for each technique.
4. The advantages and/or disadvantages of each technique.
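To make the black-box step concrete, here is a minimal sketch of a boundary-value test written with TestNG, the framework used in this project. The Server class and its setCpuCount method (with an assumed valid range of 1 to 128) are hypothetical stand-ins for illustration, not the thesis's actual API.

    import org.testng.Assert;
    import org.testng.annotations.Test;

    // Hypothetical class under test (a stand-in, not the thesis's real Server.java).
    class Server {
        private int cpuCount = 1;
        void setCpuCount(int n) {
            if (n < 1 || n > 128) throw new IllegalArgumentException("cpuCount out of range");
            cpuCount = n;
        }
        int getCpuCount() { return cpuCount; }
    }

    public class ServerBvaTest {

        // Positive test: the lower boundary of the assumed valid range.
        @Test
        public void cpuCountAtLowerBoundIsAccepted() {
            Server server = new Server();
            server.setCpuCount(1);           // boundary value
            Assert.assertEquals(server.getCpuCount(), 1);
        }

        // Negative test (expected to be rejected): just outside the boundary.
        @Test(expectedExceptions = IllegalArgumentException.class)
        public void cpuCountBelowLowerBoundIsRejected() {
            new Server().setCpuCount(0);     // invalid input
        }
    }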
1.5 Report Structure

The remaining chapters of this dissertation are organized as follows:

Chapter 2 discusses the background of the industrial work on which the problem is based.

Chapter 3 discusses the context of the problem in detail and the previous work done.

Chapter 4 describes the tools used for testing and for measuring test coverage.

Chapter 5 gives the details of my project: the packages and classes written to generate the results (functionality and a description of the classes), including the use case diagram, the class diagram, and their descriptions.

Chapter 6 explains the experiments performed and the results generated, covering all the metrics derived from the model of API testing.

Chapter 7 presents the evaluation of the work and explains the reasons for the results generated by both static test coverage analysis and dynamic test coverage execution.

Chapter 8 discusses the conclusions and future work.

References: contains citations for all references used in this project.

Appendices: divided into five parts:
1. Appendix A discusses the terminology.
2. Appendix B discusses the usage of Sigtest (the static tool for test coverage generation).
3. Appendix C discusses the usage of Apache Ant.
4. Appendix D discusses the use of the TestNG testing framework.
5. Appendix E contains the high-level class diagram.
2. Background of Industrial Work

This chapter explains the business motivation for the project: the background of WS-API testing and the need to measure test coverage. The Oracle Virtual Machine (OVM) WS-API provides a Web Services interface for building applications with Oracle VM. SOAP, REST-XML, and REST-JSON [1, 2, 3, 4, 5] are the protocol specifications used for exchanging structured information when implementing Web Services over computer networks. The protocols offer the same functionality; the choice between them depends on the client implementation. A client program can use only one type of API interaction, either SOAP, REST-XML, or REST-JSON; mixing these protocols is not supported and may cause unexpected results. OVM [37] uses the WS-API for communication between client code and the Oracle Virtual Machine Manager (OVMM) [37]. There are three ways for client code to communicate with the OVMM using the WS-API, as explained below; Figure 2 shows the same in pictorial form.

1. Command Line Interface (CLI): the first approach used by a client to communicate with the OVMM through the WS-API. The CLI is accessible using an SSH client and uses the WS-API [37].
2. Graphical User Interface (GUI, UI, or Web UI): the second approach. The GUI is accessible using a web browser and uses the WS-API [37].
3. WS-API Java tests: the last approach. The teams were porting the Python code to Java using a testing framework known as TestNG (Testing Next Generation) [27]. The aim of writing the test cases is to test the API and core functionality provided by the WS-API, and also to communicate with the OVMM [37].

Figure 2: High-level description of the Oracle Virtual Machine (diagram adapted from an internal Oracle document) [37]
The WS-API allows clients to write applications that customize, control, and automate OVM. For example, it allows clients to [37]:

1. Create, start, restart, and stop virtual machines, and migrate them between different OVM Servers.
2. Create server pools from one or more servers, and perform other operations on servers.
3. Expand the capacity of the OVM environment by adding more OVM Servers and storage providers.
4. Integrate OVM with other third-party tools such as monitoring applications.

OVM is a platform that provides a fully equipped environment with the latest technology and the benefits of virtualization. It allows clients to deploy operating systems (OS) and application software in a virtualized environment. OVM insulates users and administrators from the underlying virtualization technology and allows daily operations to be conducted through goal-oriented GUI interfaces [37]. The main components shown in Figure 3 are discussed briefly below:

1. Client applications: OVMM is accessed either through the GUI (Web UI) in a web browser or through the CLI; both are accessible using an SSH client and use the WS-API. All communication with OVMM is secured using key- or certificate-based technology.
2. Oracle VM Manager: used to manage OVM Servers, virtual machines (VMs), and so on. It also allows the infrastructure to be managed directly from the command line. Each of these interfaces runs as a separate application from the OVMM core and communicates with it using the WS-API. OVMM is also accessible through the Web UI. It can run on a standalone computer, or inside a virtual machine running on an instance of OVM Server. All actions within OVMM are triggered by the OVM Agent using the WS-API.
3. Oracle VM Manager database: used by the OVMM core to store and track configuration, status changes, and events. OVMM supports the MySQL Enterprise database. The database is configured for the exclusive use of OVMM and must not be used by any other application. It is automatically backed up on a regular schedule by the backup manager, and facilities are provided for manual backups as well.
4. Oracle VM Server: installed on a bare-metal computer, it contains the OVM Agent (ovs-agent) to manage communication with OVM Manager. It has two domains:
   a. dom0 (domain zero): the management or control domain, with privileged access to hardware and device drivers.
   b. domU (user domain): an unprivileged domain without direct access to hardware or device drivers. It is started and managed on an OVM Server by dom0.

One or more OVM Servers are clustered together to create server pools. This allows OVMM to handle load balancing and failover for high-availability (HA) environments. Virtual machines run within a server pool and can easily be moved between the servers in the pool. Server pools also provide logical separation of OVM Servers and virtual machines.
5. External shared storage: provides storage for many purposes and is also important for enabling high-availability (HA) options with the help of clustering. OVMM handles the discovery and management of storage, interacting with the OVM Servers through the Storage Connect framework; the OVM Servers then interact with the storage components. OVM supports a variety of external storage types, including NFS, iSCSI, and Fibre Channel [37].

Figure 3: Product architecture (diagram adapted from an Oracle document) [37]
The product architecture shown in Figure 3 allows client code to perform the following operations, each of which supports many further operations; this is a high-level description.

1. Discover servers: OVM Server can be installed on either x86 or SPARC hardware platforms. The OVM Agent (also known as ovs-agent) installed on each OVM Server is a daemon that runs within dom0 on each OVM Server instance. Its primary roles are to:
   - facilitate communication between OVM Server and OVM Manager;
   - carry out all configuration changes required on an OVM Server instance, in accordance with the messages sent to it by OVM Manager;
   - start and stop virtual machines as required by OVM Manager.
   The OVM Agent also maintains its own log files on the OVM Server, which can be used for debugging issues.

2. Discover storage: storage in Oracle VM can be provided using any of the following technologies:
   - shared network-attached storage: NFS (Network File System);
   - Fibre Channel SANs connected to one or more host bus adapters (HBAs);
   - local disks;
   - shared iSCSI SANs: abstracted LUNs or raw disks accessible over the existing network infrastructure.

3. Create server pool: a server pool is a required entity in OVM, even if it contains only one OVM Server. Usually a server pool has several OVM Servers, and an OVM environment may contain one or more server pools. Server pools can be either clustered or non-clustered, but typically they are clustered. In all versions of OVM before 3.4 there was a Master server, responsible for centralized communication with the OVMM; if necessary, any other server in the pool could take over the Master role. In OVM 3.4.1 and later, there is no longer a concept of a Master server. A virtual machine (VM) can be moved from one OVM Server to another without an interruption of service, because server pools have shared access to storage repositories.

4. Create repository: after completion of the storage phase, the next task is to create a storage repository. When the repository is created, OVM Manager takes ownership of it, and the OVM Servers selected during the creation process gain access to the repository in order to store VMs, ISO files, templates, and so on. A repository can also be created on a local storage disk, with the caveat that the data within such a repository is only available to virtual machines running on the OVM Server to which the disk is attached.

5. Create network: the network infrastructure of an OVM environment establishes connections between:
   - the OVM Servers within the environment;
   - each OVM Server and the storage subsystems it uses;
   - OVM Manager and all OVM Servers in the environment;
   - the virtual machines (VMs) running in a server pool.
   All of these network connections can take full advantage of the features supported by OVM, which include networked file systems (NFS), clustering, redundancy and load balancing, bridging, and support for virtual LANs (VLANs).

6. Create virtual machine (VM): the entire Oracle VM environment is designed for the purpose of running and deploying virtual machines, so they are a significant component of the Oracle VM architecture. Virtual machines offer different levels of support for virtualization at the level of the operating system, often affecting performance and the availability of particular functionality afforded by system drivers. A VM can be created from different types of resources: a template, an assembly (which contains preconfigured VMs), or from scratch using an ISO file (image). Creating a VM from a template uses cloning: the template is imported (as an archive in zip format), unpacked, and stored as a VM configuration file with images of its disks; this is then cloned to create a new instance in the form of a VM. Similarly, an existing VM can be cloned to create a new VM, or a new template (which can in turn be used to create further VMs).
3. Related Work

This chapter discusses the background related to Web Services Application Programming Interfaces (WSAPI), software quality, API testing, and test coverage. It discusses the two approaches to measuring the test coverage of the API and core functionality provided by a WSAPI, static test coverage analysis and dynamic test coverage execution, and how the two approaches can be combined.

3.1 Software Quality Measurement

Today the whole world runs on software: every business depends on its use. In a competitive world, software quality therefore matters greatly, and every company wants to ensure that its clients are using a high-quality product. Software quality measurement has always been an important step in the progress of software development, and software metrics are important for making quantitative and qualitative decisions that reduce risk in software. Software quality has been defined as:

The degree of conformance to explicitly stated functional and performance requirements, explicitly documented development standards, and implicit characteristics that are expected of all professionally developed software. [10]

Software quality measurement is a process of collecting and analyzing data about software. Characteristics of a good software quality measure [8] are:

● Reliability: the ability of a system to perform its required functions under stated conditions for a fixed amount of time.
● Sensitivity: the ability to reflect variability in responses when it exists in the stimulus or situation.
● Validity: the measure actually measures what it intends to measure.

There are three aspects of software quality [38]:

1. Functional quality: the software performs the tasks it is instructed to do, efficiently.
2. Structural quality: the code is well structured. Structural quality is generally hard to test.
3. Process quality: the quality of the process involved in developing the software; this is also critically important.

All three aspects are equally important and should be tested or implemented properly to improve quality and obtain a better product.
3.2 Introduction to WSAPI

Web Services are client and server applications that communicate over the World Wide Web's Hypertext Transfer Protocol (HTTP). According to World Wide Web Consortium [5, 9] standards, Web Services provide a standard means of interoperating between software applications running on a variety of platforms and frameworks. Web Services can be combined in a loosely coupled way to achieve complex operations. There are two types of Web Services: RESTful services, which use REST (Representational State Transfer), and SOAP-based ("RESTless") services, which use the Simple Object Access Protocol. Many applications use Web Services today; for example, in Gmail [4] the JavaScript client program invokes web APIs to get the mail data. The WSAPI has become an important tool for web developers and an effective marketing tool for many types of businesses. Some of the most popular and widely used Web Services APIs, and the growth in their use over time, are shown in Figures 4 and 5 respectively [12]. Model-based techniques have been used to build test models to obtain better service quality and reliability. These test models are used to derive test cases for testing the API and core functionality provided by Web Services. Testers can easily update the models and regenerate the test suite for any changed requirement, avoiding error-prone manual changes [5].

Figure 4: Popular Web API categories
Figure 5: API usage over time

3.3 API Testing

A key part of testing software services is ensuring their functional quality, because when a set of services is combined, many more opportunities for error or failure are introduced. Software testing is an essential activity in measuring software quality and in maintenance. Software testing also involves the generation of test coverage reports, which record what has and has not been tested. For a large software project it is hard to remember what functionality has been tested and what is left, so the test coverage report helps testers observe the areas that require further testing and prioritize their testing work in the areas that are most important to test first.
API testing differs from other software testing in the metrics to be tested. Software testing generally observes common metrics [28] such as branch coverage, statement coverage, MC/DC coverage, and DU-pair coverage. API testing, on the other hand, tests the application end to end, service by service, and interface by interface. API testing can be compared with GUI testing: the GUI presents a user-friendly collection of windows, dialog boxes, buttons, and menus, whereas the API consists of a set of direct software links, or calls, to lower-level functions and operations [13, 14]. Developers test an API by writing kernel-level [40] unit tests; unit testing provides test coverage by checking the outputs of the API for different combinations of inputs. Functional or system testing [40], by contrast, invokes API calls indirectly by running test cases with customer inputs (reflecting how customers use the system).

3.4 Test Coverage

Testing can also produce a test coverage report, which helps in evaluating the effectiveness of testing by providing data on different coverage items. According to the International Software Testing Qualifications Board (ISTQB) [7]: "Test coverage is the extent that a structure has been exercised as a percentage of the items being covered. If test coverage is not 100%, then more tests may be designed to test those items that were missed and therefore, increase test coverage."

Test coverage helps monitor the quality of testing. It can also direct test case generators toward the areas that need further testing, also known as the gaps in the software testing, i.e., the areas that have not been tested. It also provides the user with information on the status of the verification process [21], and it helps with regression testing, test case prioritization, test suite augmentation, and test suite minimization. There are two ways of evaluating the test coverage of software: static test coverage analysis and dynamic test coverage execution. There are many dynamic test coverage tools (such as JaCoCo, EclEmma, Clover, and EMMA). The following sections briefly review both approaches [29]. Static analysis and dynamic execution arose in different communities and evolved along parallel but separate tracks. Traditionally, they have been viewed as separate domains, but they can sometimes be combined to generate a better result. The two test coverage approaches are discussed briefly in the following sections.

3.4.1 Static Test Coverage Analysis

Static test coverage analysis examines the given code and reasons about all possible behaviors that might arise at run time. Static test coverage analysis can also be used to prove the absence of run-time errors using a verification tool such as the Polyspace verifier [36]. Verification is a technique that may be used at any time; it requires no prior knowledge of the code to be analyzed. Compiler optimizations are standard static analyses. According to the MIT Laboratory for Computer Science, static analysis is conservative and sound [29, 31]:
Soundness guarantees that analysis results are an accurate description of the program's behavior, no matter on what inputs or in what environment the program is run. Conservatism means reporting weaker properties than may actually be true; the weak properties are guaranteed to be true, preserving soundness, but may not be strong enough to be useful.

A sound system is one that proves only true sentences, i.e., provable(p) => true(p) [33]: a statement is reported as true only if it can be proved to be true. Static test coverage analysis means obtaining test coverage without actually executing the piece of code [33]. Tools for Java and Spec# [18, 19] have proved very useful for detecting runtime errors in small, simple programs without actually running them, and they have demonstrated the importance of using an intermediate language (e.g., Boogie) [17, 18, 19] for static verification of code. Static analysis can also be used to prove a program segment incorrect, i.e., to find bugs in the program.

One approach to estimating test coverage using static analysis is explained below. This work was carried out by Tiago L. Alves and Joost Visser of the Software Improvement Group, Netherlands [30]. They introduced a technique of slicing static call graphs at method, class, and package level to estimate test coverage, and validated the results against those obtained with a dynamic tool. The steps are as follows [30]:

1. Graph construction: a graph is derived that represents packages, classes, interfaces, and methods, and the relations between them. A graph is defined as

   G = (V, E)

   where G is the graph, V is the set of vertices (the nodes), and E is the set of edges between those vertices (the relationships between them). The packages (P), classes (C), interfaces (I), and methods (M) make up the vertices, so the vertex set is partitioned into subsets N_n ⊆ V with n ∈ {P, C, I, M}. Edges are formed by the following relations:
   a. DT: defines a type (a class or an interface);
   b. DM: defines a method;
   c. DC: a direct call of a class or method;
   d. VC: a virtual call made between methods.
   The edge set is expressed mathematically as E ⊆ {(u, v)_e | u, v ∈ V}, where e ∈ {DT, DM, DC, VC}.

2. Count methods per class: based on the above definitions, the following metrics are measured:
   a. The number of covered methods per class (CM) is calculated by counting the outgoing define-method edges whose target is a method contained in the set of covered methods.
   b. The number of defined methods per class (DM) is calculated by counting the outgoing define-method edges per class.
3. Estimate static test coverage: using the two counts above, basic coverage metrics can be calculated:
   a. Class coverage: the ratio of covered methods to defined methods in the class [30]:

      Class Coverage (%) = (covered methods (CM) / defined methods (DM)) × 100

   b. Package coverage: the ratio of the total number of covered methods to the total number of defined methods across all classes of the package [30]:

      Package Coverage (%) = (Σ covered methods / Σ defined methods) × 100

The results obtained from this static estimation answer the question "can test coverage be determined without actually running the tests?" It was observed that the static estimate can not only be used as a predictor of test coverage, but also detects coverage fluctuations.
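The class- and package-coverage arithmetic above is simple enough to capture in a few lines. The sketch below is my own illustration of the formulas (the names are mine, not those of Alves and Visser): given per-class counts of covered and defined methods, it computes class coverage and aggregates package coverage.

    // StaticCoverageEstimate.java - illustrative arithmetic only.
    public class StaticCoverageEstimate {

        // Class coverage: covered methods (CM) over defined methods (DM), in percent.
        static double classCoverage(int coveredMethods, int definedMethods) {
            return 100.0 * coveredMethods / definedMethods;
        }

        // Package coverage: total covered over total defined across all classes.
        static double packageCoverage(int[] covered, int[] defined) {
            int coveredSum = 0, definedSum = 0;
            for (int i = 0; i < covered.length; i++) {
                coveredSum += covered[i];
                definedSum += defined[i];
            }
            return 100.0 * coveredSum / definedSum;
        }

        public static void main(String[] args) {
            // Two classes: 3 of 4 methods covered, and 1 of 2 methods covered.
            System.out.println(classCoverage(3, 4));                               // 75.0
            System.out.println(packageCoverage(new int[]{3, 1}, new int[]{4, 2})); // approx. 66.67
        }
    }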
3.4.2 Dynamic Test Coverage Execution

Dynamic test coverage execution operates by executing the given piece of code and observing its behavior [31, 35]. Dynamic test coverage tools (such as JaCoCo, EclEmma, Clover, and EMMA) [25, 26] generate the coverage report. Theoretically and experimentally, dynamic analysis is precise, because there is no approximation in the process of generating the coverage report [31, 33]: it examines the actual, exact run-time behavior. An analysis is "dynamic" if it emphasizes control-flow accuracy over data-flow richness and generality [33]. Dynamic analysis of a program provides concrete reports of errors, which makes debugging easier [32]. Directed automated random testing (DART) [11, 32] introduced the concept of dynamic verification combined with lightweight static analysis. The major challenge in generating test coverage by dynamic execution is selecting a representative set of test cases (inputs to the program being analyzed) [29]. Dynamic execution can be used even in situations where approximate, rather than perfect, program semantics are acceptable; as a result, and because of its significant successes, dynamic execution is gaining credibility. Its major drawback is that its results may not generalize to future executions. Many tools can generate coverage results using the dynamic approach; in this project I have used EclEmma [25, 26], an Eclipse plug-in that is installed and then used to generate a test coverage report by running the given piece of code.

3.5 Combined Study

Static test coverage analysis and dynamic test coverage execution can enhance each other, and the two techniques can be applied together to generate a better result than either can produce individually. Testing on its own does not guarantee completeness, and verification on its own may guarantee completeness but does not deal with the implementation; neither alone is enough to guarantee software quality. Runtime monitoring of software is mainly used for profiling, performance analysis, and software optimization, as well as for software fault detection, diagnosis, and recovery [31]. Here the two approaches are first studied individually, then compared with each other, and finally combined so as to obtain a better and more efficient result.

Static analysis verifies a program over all possible execution paths before the actual execution. Dynamic execution verifies the system's properties by executing the program: it can detect potential faults at runtime, and it maps the abstract specification to the actual concrete implementation. The soundness of static analysis can benefit an efficient and precise dynamic test coverage execution process. Combining the two approaches can help developers identify errors and generate test cases, because static analysis provides the structure of the program, making it easier to produce test cases that cover most of the program paths. Both methodologies operate on the program and its specification, so they can be mixed in the way shown in Figure 6: the program is combined with its specification and analyzed statically, and the result of the static analysis then serves as input to dynamic test coverage execution, for example to generate its test cases.

Figure 6: Combined use of static analysis and dynamic execution

The topics discussed in this chapter are necessary for understanding the technical background and the existing research on static test coverage analysis and dynamic test coverage execution, and for understanding the basic principles on which the two approaches operate. The concepts of API testing and the model of GUI testing help in developing the model of API testing. The work on verification tools such as Spec# is used for program verification and for investigating runtime errors without actually running the program. The material on testing and test coverage underpins the generation and comparison of the test coverage metrics by the static and dynamic techniques in the rest of this project.
4. Tools and Techniques Used

This chapter describes all the tools and techniques used for testing and for generating the test coverage reports. It contains four sections:

1. The first section discusses the tools and techniques used to generate coverage results with the static tool.
2. The second section discusses the tools and techniques used to generate coverage results with the dynamic tool.
3. The third section discusses the tool used for testing.
4. The fourth section discusses an Eclipse [25, 26] plug-in used to generate the class diagram.

Before describing the individual tools, it is important to explain how they relate to one another. Figure 7 shows the relationships between all the tools used, which helps to explain which tools were used and why.

Figure 7: Diagram showing the relation between the tools and the code
The diagram in Figure 7 shows how all the tools and the code are related. The process starts with designing the use case and class diagrams. First, the source code (including the API and core functionality) was written following the class diagram and use case diagram produced with the Unified Modeling Language (UML) tool (StarUML). After code development, Spec# [17, 18, 19, 20] is applied to the source code for formal verification (to remove runtime errors). This step matters before generating results with the static technique, because static test coverage analysis does not execute the code, so runtime errors must be removed before the coverage report is generated. Test classes were then written using the testing tool. The static and dynamic test coverage tools use the test classes to generate their coverage results, and the static tool is driven through Apache Ant. The following sections discuss the tools in turn.

4.1 Static Tool

This section discusses the tooling used to generate coverage results with the static technique. It has three subsections:

1. The first subsection covers the Sigtest user guide: generating a signature file (a file containing all the information about each class and its members) and then using that signature file to generate the coverage report.
2. The second subsection covers Apache Ant. The static tool is driven from the command line, so Apache Ant is used to collect all the command-line instructions in a single build file.
3. The third subsection covers Spec#, a tool for detecting runtime errors without actually executing the code.

4.1.1 Sigtest User Guide

Sigtest is a tool that generates test coverage using static analysis. It is distributed as a jar file and produces coverage in two simple steps. The signature test tool operates from the command line to generate signature files, and it can also be used to manipulate a signature file according to the user's requirements. A signature file is a textual representation of all public and protected members of an API. The two simple steps for generating the coverage of the API methods are:

1. Generate the signature file of the APIs under test.
2. Use the generated signature file to work out the test coverage of the APIs.

Both steps are described below, along with the syntax and the logic behind them.

4.1.1.1 Generate the signature file

The signature file can be generated with the following syntax [22]; the principle on which it works is explained after the options table.

    java -jar sigtestdev.jar Setup [options]
Table 4.1 describes the command options used and the values they accept (for more detail, refer to Appendix B).

Table 4.1: Setup Options

    Option                          | Required/Optional | Description
    -classpath path                 | Required          | Path to one or more APIs under test.
    -FileName file_name             | Required          | Name of the signature file to be created.
    -package package_or_class_name  | Optional          | A class or package to be included in the signature file. The -package value acts as a filter on the set of classes specified in -classpath.

The sigtestdev.jar file checks the options given in the command and generates the signature file. It reads all the jar files provided on the class path, filters out the packages and classes specified by the -package option, and generates a file containing all the public and protected members. The generated file has the name provided by the -FileName option. There are more options available, but all of them are optional and can be used according to the user's requirements.
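As a concrete illustration, the following invocation sketches how a signature file might be generated for the wsapi package of this project. The paths are placeholders, not the project's real layout; note that the class path typically also needs the JDK runtime classes (e.g., rt.jar) so that superclasses such as java.lang.Object can be resolved.

    java -jar sigtestdev.jar Setup -classpath classes:$JAVA_HOME/jre/lib/rt.jar -FileName wsapi.sig -package wsapi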
The signature file thus created is shown in Figure 8 and explained below.

Figure 8: A small snippet showing the content of a signature file

Figure 8 shows a small portion of the generated signature file. The first two lines are the header: #Signature file v4.1 means that the file is a signature file and that its version is 4.1. The version has a special meaning and depends on the version of the jar file used; there are several other versions (refer to Appendix B). The second line is #version, which can be set to any number with the -version option (for details of its use, refer to Appendix B). After these two lines, the signature file contains the information about every class and its members. Every class is declared with the keyword CLSS. Every public and protected method is described by meth. Default and parameterized constructors are declared as cons. Enumerations are declared with the inner keyword, the superclass is declared as supr, and the private data members of the class are declared as hfds. All method names and class names are fully qualified. For example, in Figure 8, meth public wsapi.Server killServer(wsapi.OvmWsClient,wsapi.Server) throws java.lang.Exception is a method: it returns an object of the Server class of the wsapi package, accepts two arguments, and throws an exception.
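Since the figure itself is not reproduced here, the following hand-made approximation, assembled from the keywords just described rather than copied from the actual file, indicates what such a snippet looks like (the field name is hypothetical):

    #Signature file v4.1
    #Version 1.0
    CLSS public wsapi.Server
    cons public init()
    meth public wsapi.Server killServer(wsapi.OvmWsClient,wsapi.Server) throws java.lang.Exception
    supr java.lang.Object
    hfds serverName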
4.1.1.2 Generate the coverage report

The signature file is then used to generate the test coverage report, with the syntax given below [22]; the principle on which it works is explained after the options table.

    java -jar apicover.jar [options]

Table 4.2 describes the options used to generate the test coverage report for this project (for more detail on usage, refer to Appendix B).

Table 4.2: Test Coverage Options

    Option                   | Required/Optional | Description
    -ts path                 | Required          | Path to the test classes.
    -api path/filename       | Required          | Path of the signature file to be used.
    -excludeFields           | Optional          | Exclude all data members from all classes and interfaces.
    -apiInclude packageName  | Optional          | Recursively include classes from the API package.
    -mode [w|r]              | Optional          | Mode of test coverage calculation: w for worst case or r for real world. Defaults to worst case (see Coverage Analysis Modes below).
    -detail [0-4]            | Optional          | Level of report detail, an integer from 0 to 4. Defaults to 2 (see Table 4.3).
    -format [plain|xml]      | Optional          | Report format, plain text or XML. Defaults to plain.
    -report path/filename    | Optional          | Path for the generated report file. Defaults to standard output.

The apicover.jar file processes the content of the signature file and the specified test classes to generate the test coverage report. The -ts option (see Table 4.2, or Appendix B for more detail) takes the path of the test classes. The tool reads the (public and protected) members of every class from the signature file and checks whether each one is referenced in the test classes; if a member is referenced, it is marked as tested. This continues for every member of every class in the signature file, yielding counts of total members and tested members, from which the coverage percentage is calculated as:

    Test coverage (%) = (tested members / total members) × 100

Other options control how the result is displayed. For example, to redirect the coverage report to a file, the file is specified with the -report option; apicover.jar checks for this option and redirects the output to that file. In the same way, apicover.jar checks every option in the command and acts on it (for more detail, refer to Appendix B).

Coverage Analysis Modes

The API Coverage tool uses two modes of analysis [22]:

● Real-world mode: the report file contains only the members of the API classes under test. The class java.lang.Object is the root of the class hierarchy, and its member functions (such as clone(), equals(), finalize(), hashCode(), and toString()) are not included in real-world mode, so this mode gives a more accurate result.

● Worst-case mode: the report includes all methods (including clone(), equals(), finalize(), hashCode(), and toString()) in each class mentioned in the signature file. Unlike real-world mode, this reduces the overall coverage percentage.

Table 4.3: Report File Contents for Levels of Detail

    Detail setting | Package summary | Class summary | Members not covered | Covered members
    0              | X               |               |                     |
    1              | X               | X             |                     |
    2              | X               | X             | X                   |
    3              | X               | X             | X                   |
    4              | X               | X             | X                   | X

Table 4.3 [22] shows the detail levels used when generating the result; the detail level controls what information is included in the coverage report file. For example, to see package summaries, class summaries, tested members, and untested members in the report file, the command must include -detail 4 (refer to Appendix B). A report can similarly be generated at any other detail level, as the user requires.
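Again as an illustration, a full report for this project's package might be requested as follows; the file names are placeholders consistent with the signature-file example above.

    java -jar apicover.jar -ts testclasses -api wsapi.sig -apiInclude wsapi -mode r -detail 4 -format plain -report coverage.txt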
4.1.2 Apache Ant

Static test coverage analysis does not require the execution of test classes, but it does involve more than one instruction to generate the coverage report, with many arguments per instruction. To avoid retyping all the instructions every time a report is generated, it is recommended to use Apache Ant [23, 24]. Apache Ant is a command-line tool that executes instructions from build files. Its best-known use is in Java applications: Ant supplies a number of built-in tasks for compiling, assembling, testing, and running Java applications. Apache Ant [23, 24] is extremely flexible and does not impose coding conventions or directory layouts on Java projects.

4.1.2.1 How to Write a Simple Build File

Ant's [23, 24] build files are written in XML. Each build file contains one project and at least one (default) target. Targets contain task elements. Each task element of the build file can have an id attribute, whose value must be unique, and can later be referred to by that value. For more detail on writing a simple build.xml file, refer to Appendix C.
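To make this concrete, here is a minimal sketch of a build file that wraps the apicover invocation shown earlier. It follows the project/target/task structure just described; the file names and paths are placeholders, not the project's actual layout.

    <?xml version="1.0"?>
    <project name="coverage" default="apicover">
      <!-- One target holding a single task that runs the static coverage tool. -->
      <target name="apicover">
        <java jar="apicover.jar" fork="true">
          <arg line="-ts testclasses -api wsapi.sig -apiInclude wsapi -detail 4 -report coverage.txt"/>
        </java>
      </target>
    </project>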
4.1.3 Spec Sharp
Static test coverage analysis does not execute the code, yet many errors only appear at run time, not at compile time; such code may throw an exception and fail to produce the correct output. To mitigate this there are tools which verify run-time checks and remove run-time errors without actually executing the code. One such tool is Spec Sharp. Spec Sharp (also written Spec#4) is a programming system that facilitates the development of correct software. The Spec# language extends C sharp (C#) with contracts. The Spec# tool suite consists of a compiler that emits run-time checks for contracts and a static program verifier that attempts to mathematically prove the correctness of programs [17, 18]. The diagram in figure 9 shows how the Spec# programming system works [20]: code written in Spec# is compiled to bytecode, and this bytecode is translated into the Boogie language by a translator. A Verification Condition (V.C.) generator then generates logical verification conditions from the Spec# program, and an automatic reasoning engine (e.g. Z3 [20]) analyzes these verification conditions, either proving the correctness of the program or finding run-time errors. This automatic reasoning engine is known as a SAT [20] or SMT (Satisfiability Modulo Theories) solver. Z3 is the name of Microsoft's SMT solver that implements a decision procedure for the formulas generated by Boogie.
4 Online compiler at www.rise4fun.com/specSharp [19].
31
Figure 9: Flowchart to show how Spec# works to generate the run-time errors [20]
Spec# has 3 main method contracts, as given below:
1. The precondition expresses the constraints under which the method operates properly. It is written with the "requires" keyword.
2. The postcondition expresses what holds when a method executes properly. It is written with the "ensures" keyword.
3. The loop invariant has to be true before and after the loop. It is written with the "invariant" keyword.
Figure 10 - Spec sharp example [20]
Figure 10 shows an example using all three method contracts: a method which takes an integer as an argument and returns the integer square root of that value. The precondition requires that the integer value be greater than or equal to zero; the method can be reconstructed as in the sketch below.
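The following is a reconstruction of the figure 10 method from the contracts described in this section; the method name ISqrt is illustrative:

    static int ISqrt(int x)
        requires 0 <= x;
        ensures result*result <= x && x < (result+1)*(result+1);
    {
        int r = 0;
        while ((r+1)*(r+1) <= x)
            invariant r*r <= x;
        {
            r++;
        }
        return r;
    }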
32
The postcondition says that result*result <= x && x < (result+1)*(result+1). It means that:
1. the square of the result is less than or equal to the value itself (result*result <= x), and
2. the value is less than the square of the successor of its square root (x < (result+1)*(result+1)).
Let us check this with two different values. First take x = 0. When x is 0 its integer square root is 0, so the postcondition becomes 0 <= 0 && 0 < 1, which is true. Now take a value greater than 0, e.g. 4. With x = 4 the postcondition becomes 4 <= 4 && 4 < 9, so it is again true. The invariant says that r*r <= x. Let us check this too with two different values. First take x = 0: the loop condition (r+1)*(r+1) <= x is false immediately, so r remains 0 before and after the loop and the invariant 0*0 <= 0 holds. Now take x = 4:
Iteration | Value of r before loop body | Value of x | Loop condition (r+1)*(r+1) <= x | Value of r after loop body
1         | 0                           | 4          | 1 <= 4, true                    | r++, 1
2         | 1                           | 4          | 4 <= 4, true                    | r++, 2
3         | 2                           | 4          | 9 <= 4, false                   | 2
At the third iteration the loop condition becomes false and the loop exits. Hence the invariant holds true before and after the loop.
4.2 Dynamic Tool
A dynamic test coverage report is generated by executing the code on a real or virtual processor. For dynamic test coverage execution to be effective, the target code must be executed with sufficient and necessary inputs to generate the expected output. Dynamic test coverage execution demonstrates the presence of errors (but not their absence). For the dynamic test coverage in this project, Eclemma has been used; the following section is a brief discussion of the Eclemma tool.
4.2.1 Eclemma
Eclemma5 [25, 26] is an open source Java code coverage tool for Eclipse. It reports test coverage results inside the Eclipse workbench. When generating a test coverage result, testers usually focus on two aspects of the coverage result:
1. Coverage Overview: The coverage view lists test coverage summaries for Java projects, packages, classes, methods etc. It can also be customized to show results according to user requirements. Figure 11 shows the coverage overview.
5 The eclemma plug-in for eclipse can be downloaded from http://update.eclemma.org/.
33
Figure 11: Coverage view on the basis of different counters
As shown in figure 11, the top right corner offers other counters such as the instruction counter, branch counter, line counter, method counter, etc. The view also allows expanding a project to observe the test coverage of a package, a class and their methods. For example, there is a config package which has 3 classes; one of them is LAB.java, and on expanding this class one can see its member functions and their test coverage.
2. Source highlighting: The result of a coverage session is also directly visible in the Java source editors. A customizable color code highlights fully covered, partly covered and not covered lines [25, 26]. Figure 12 shows the source highlighting of the source code of a particular class. Green shows the fully covered lines, yellow the partially covered lines, and light red the lines which were skipped. The lines which are partially covered are usually branches; a plain statement can only be either fully covered or not covered (a minimal illustration follows figure 12).
Figure 12 - Source highlighting to see partially covered, fully covered and skipped lines
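To make the distinction concrete, consider the following minimal example (a hypothetical class; assume the test suite only ever calls describe() with a positive argument). The if line would be highlighted yellow because only its true branch is exercised, the first return green, and the second return red:

    public class Classifier {
        public static String describe(int x) {
            if (x > 0) {               // partially covered: only the true branch is taken
                return "positive";     // fully covered
            }
            return "non-positive";     // never executed by the test, so not covered
        }
    }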
34
3. Customize your result: Eclemma is open source and offers additional features to generate test coverage according to user requirements:
● Different counters: Several metrics can be selected, such as lines, instructions, methods, branches, types or cyclomatic complexity, to view the test coverage result.
● Multiple coverage sessions: One can run more than one test coverage session and switch between them.
● Merge sessions: All test coverage sessions can be merged.
● Export the result: Coverage data can be exported in HTML, XML or CSV format, or as JaCoCo execution data files (*.exec) which can be used further.
● Run configuration: The report to be generated can also be customized. By default Eclemma uses a run configuration which shows the test coverage of both the 'src' folder and the 'test' folder. But there is no need to report the test coverage of the test classes themselves: test classes exist to obtain the coverage of the source code or of any external jar files used. Eclemma allows customizing the run configuration to get results according to the user. Figure 13 shows how to change the run configuration in Eclemma: it defines a new run configuration which covers only the external jar file and the source code. For this project the external jar file contains the classes required for the API and core functionality.
Figure 13: Customize your test coverage result according to requirement
35
4.3 Testing Tool
TestNG [27] is a testing framework inspired by JUnit and NUnit, but it introduces new functionality that makes it more powerful and easier to use. TestNG is designed to cover all categories of tests: unit, functional, end-to-end, integration, etc. The following annotations have been used in the test classes for the experiments; a small sketch follows the list, and for more details on their use please refer to appendix D.
1. @BeforeClass: The annotated method runs before the first test method in the current class is invoked.
2. @BeforeMethod: The annotated method runs before each test method, usually to set up the environment before every test.
3. @AfterMethod: The annotated method runs after each test method, usually to clean6 the environment after every test.
4. @AfterClass: The annotated method runs after all the test methods have been executed.
5. @DataProvider: It is used to provide data for a test method. The method must return an Object[][] where each Object[] can be assigned to the parameter list of the test method.
6. @Test: Marks a method as a test.
6 Clean means deleting the previous setting used in the last test.
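As referenced above, a minimal sketch of a test class using these annotations is shown below. The class name and method bodies are illustrative; generateRandomString() is the utility method of RandomStringGenerator.java described in chapter 5:

    import org.testng.Assert;
    import org.testng.annotations.*;

    public class RandomStringGeneratorTest {

        @BeforeClass
        public void oneTimeSetUp() { /* e.g. open the WSAPI connection once */ }

        @BeforeMethod
        public void setUp() { /* set up the environment before every test */ }

        // Each Object[] becomes the parameter list of one test invocation.
        @DataProvider(name = "lengths")
        public Object[][] lengths() {
            return new Object[][] { { 8 }, { 16 }, { 32 } };
        }

        @Test(dataProvider = "lengths")
        public void generatesStringOfRequestedLength(int length) {
            Assert.assertEquals(
                RandomStringGenerator.generateRandomString(length).length(), length);
        }

        @AfterMethod
        public void tearDown() { /* clean the environment after every test */ }

        @AfterClass
        public void oneTimeTearDown() { /* release any shared resources */ }
    }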
4.4 Unified Modeling Language Tool
For the class diagrams, use case diagrams, package description diagrams etc., a tool called StarUML has been used. The Unified Modeling Language (UML) is a general-purpose modeling language which offers a standard way to visualize a system. There is also an Eclipse plug-in called Object Aid7 which allows creating class diagrams inside Eclipse.
Figure 14: Process to create a file using Object Aid eclipse for Class Diagram
7 It is available online at http://www.objectaid.com/installation.
36
Figure 14 shows how to create the file used to make a class diagram or sequence diagram. The tool also allows exporting the diagram as an image for further use. It is a very simple tool to use, and because the diagrams are generated directly from the code they remain accurate. After creating the file, right click anywhere in it, select add and then select java classifier. This allows adding all the Java classes, interfaces etc. to be included in the class diagram. The layout of the diagram can also be organized automatically. Figures 14 and 15 are the pictorial representation of all these steps.
Figure 15: Create Class Diagram Using Object Aid Plug-in for Eclipse
37
5. Project Overview
This chapter discusses the code structure written for the experiments. The project addresses the industrial problem of measuring the test coverage of the API and core functionality of a WSAPI. To perform the experiments, representative code was written. Representative means that it is similar to the work done in the industry, but is only a small portion (roughly 1-2%) of what is practiced there. For example, this code contains the code related to the server class, and instead of using real servers, the experiments are done using objects of classes. The test case selection was also made on the basis of the specification used in industry. The reason for writing code related to the server class is that I started my work in the company with this class and was familiar with it and its related operations, so I implemented the API and core functionality for the thesis work for the server class only.
5.1 Representative Code Structure
Figure 16 shows the list of all the Java files, report files and build.xml files written to perform the experiments. The project folder is named dissertation and it contains 6 main folders and 2 files, briefly described below:
1. src: This folder has 5 important packages which contain the source code for the API, the core functionality and other utility classes. A detailed description of the functionality of every class in all the packages is given later in this chapter.
2. tests: This folder includes all the test classes written for testing the API and core functionality.
3. lib: This folder contains the external jar files used in the project, namely sigtestdev.jar (for creating the signature file) and apicover.jar (for generating the test coverage report) for static test coverage analysis. This folder also contains the rt.jar file (the runtime environment file, which contains all the compiled class files for the base Java Runtime Environment).
4. reports: This folder contains the signature files and the test coverage reports generated by static test coverage analysis using Apache Ant.
5. test-output: This folder is automatically generated by the TestNG framework. It contains HTML pages related to the results, such as passed tests, failed tests, skipped tests and the time taken to run the tests. This can be helpful for publishing results on a web link.
6. UML-Diagram: This folder contains the class diagram, created with the Object Aid Eclipse plug-in.
7. build.xml: The build.xml file is an Ant script. This file is used to generate the test coverage report by static test coverage analysis.
8. build.properties: This file is the property file for the build.xml file. It contains the short names for the paths used in the build.xml file; the build file then refers to the paths specified in this file.
38
Figure 16: Java file list created for the dissertation
39
5.2 Use Case Diagram
Figure 17 shows the high-level use case diagram of what the product is supposed to do. The main functionality is the communication of the client code with OVMM using the WSAPI. The client code logs in to OVMM using one of three communication protocols (SOAP, REST-XML, and REST-JSON). After login it is allowed to perform any of the following high-level operations, and it can also mix these operations in any order. All the operations are shown in figure 17 and explained after the figure.
Figure 17: Use Case Diagram
40
Figure 17 shows all the operations that a client can perform using the core functionality and API. After login the client can perform any of the following operations in any sequence:
1. Discover Servers,
2. Create Server Pool,
3. Create Storage (Network File System (NFS), iSCSI, Fibre Channel (FC)),
4. Create Network,
5. Create Repository, and
6. Create Virtual Machine.
All the above are very high-level descriptions of the operations performed by the client code. For example, after discovering a server, the client can modify its properties, such as setting NTP8 server(s), and can start, stop, restart, kill or delete the server, etc.
5.3 Class Overview/Diagram
This section describes the classes inside all the packages. There are five important packages, with some Java classes in each package. Brief details of each package and their respective classes are shown in figure 18 and explained in the section after the figure.
Figure 18: Important Packages for my dissertation
8 It is a short form of Network Time Protocol, a networking protocol for clock synchronization between computers over packet-switched, variable-latency data networks.
41
Figure 18 shows the 5 important packages required to execute the experiments. A brief discussion of each of them follows.
1. Utils: This package contains the following classes:
- JobUtils.java: This class contains wrapper functions for the methods of the core functionality; every method in the core functionality returns a job (i.e. a function to be performed). The wrappers allow the core functionality to finish a job by waiting for some time. For example, to discover a server, the wrapper loops until the server is discovered, displaying progress messages on the console, while in the meantime the core functionality discovers the server and finishes the job.
- ServerUtils.java: This is the utility class for the operations performed on a server, such as discover, start, kill, stop, delete and restart. Inside this class, the corresponding method from the JobUtils.java class is called, which performs the required function by directly calling the methods of the core functionality.
- TestUtils.java: This class contains the methods used for cleaning the environment after a test method or a test class, including functions to clean9 the configuration file depending on the user's requirement. It also has methods which return the updated object after every operation. This is useful because there is no need to refresh all objects explicitly every time, and there is no risk of error if someone forgets to refresh an object before asserting it against the updated value.
2. Builder: This package contains the following two classes:
- ServerBuilder.java: It has wrapper functions to discover a server directly using the utility code. The utility code gets access to the available manager and server from the configuration file and then calls the core functionality to discover the corresponding server. It uses method overloading: two methods with the same name but different signatures. One method accepts the objects of the manager and server configuration files; the second accepts all the details of a server, such as name, hostname, virtual IP, manager id etc.
- RandomStringGenerator.java: This class generates a random string composed of letters, numbers and special characters. It is used as the Universally Unique Identifier (UUID) for servers and managers, which is useful for retrieving all the information about servers and managers by UUID. The length of the string is equal to the integer value passed as the argument: public static String generateRandomString(int length) is the method. It throws an exception if the length passed as an argument is less than 1.
9 Clean means deleting the old configuration and restoring the default setting for another test or test class.
42
3. WSAPI: This package contains the core functionality (OvmWsClient.java) and the model objects (the API). To perform the experiments, only the Server.java class has been written, for the sake of simplicity; at the start of the thesis work I was familiar with and working on this class in the company, so I chose it for the experiments. There were 90 classes, 35 enumerations and 1 exception class in the original API used in the company.
- ImplementationType.java: This class decides the type of communication protocol (SOAP, REST-XML, REST-JSON) to be used based on the day of the week. It has a login method which checks the day in the calendar and then chooses one of the communication protocols for login.
- Login.java: This is an interface which has one abstract method. It has been added to observe the test coverage of an interface and its abstract method, and also to observe the behavior of callbacks under both test coverage approaches.
- OvmWsClient.java: This class is the core functionality. Every method in this class returns a job. For the experiments, only the jobs related to the server have been implemented. The actual class used in the industry has 435 methods for the complete core functionality, covering jobs like server discovery, network creation, server pool creation, adding and removing servers from a pool, etc.
- Server.java: This class is the model class. It has all the getters and setters for the server attributes. In the actual industry project there are 90 model objects, such as server, server pool, network, storage etc.
4. Config: This package has the classes for the configuration file10 of all the available servers and managers. Every tester is given 2 servers and 1 manager to run their own tests, and they use the same class to fetch their own servers and manager. By default 3 servers and 2 managers were included in the configuration file for the project setup.
- LAB.java: This class stores the information about the servers and managers. It is used to fetch the servers and managers during a test.
- ManagerConfig.java: This class contains all the setters and getters for the attributes of a manager on the client side.
- ServerConfig.java: This class contains all the setters and getters for the attributes of a server. The client code gets the server configuration from the LAB.java class, then passes an object of the ServerConfig.java class to the utility class, which calls the corresponding method of the core functionality. This is where the actual conversion of data types takes place: the returned object is of the Server class, and the conversion of an object from one data type to another happens in the method which calls the method of the core functionality.
10 A configuration file is a file in which the default manager and servers are stored for the experiments.
43
5. Parser: This package contains one interface which is used to define the @ReadOnly11 annotation. This annotation marks a method as read-only so that such methods can be identified and later removed from the test coverage report. The package also has a Java reflection class to parse all the methods annotated with @ReadOnly. Some attributes cannot be changed by their setter methods even if those setters are called, hence all such methods are declared read-only. This does not mean that these methods cannot be used; it means that they have no effect even if they are used.
- AnnotationParsing.java: This class uses Java reflection to parse the classes containing @ReadOnly methods. All those methods are removed from the signature file to get a more accurate test coverage report.
- ReadOnly.java: This is an interface which defines the @ReadOnly annotation using the Java @Target and @Retention policy. Figure 19 below shows how to define such an annotation; a sketch of the definition is also given at the end of this section.
Figure 19: Define your own annotation using java
- Bicycle.java: This class is a super class written for the inheritance experiment.
- MountainBike.java: This is the sub class written for the inheritance experiment.
Figure 19 shows how to define our own annotation using the Java retention policy. @Target specifies that the annotation can be applied to a method. @Retention specifies that the annotation is kept at runtime. The name of the interface is used as the name of the annotation, and is the same as the name of the class file.
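As referenced above, the annotation definition of figure 19, together with the kind of reflection scan performed by AnnotationParsing.java, can be sketched as follows; the scan target and the output format are illustrative:

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;
    import java.lang.reflect.Method;

    // ReadOnly.java: applicable to methods and retained at run time,
    // so that reflection can find the annotated methods.
    @Target(ElementType.METHOD)
    @Retention(RetentionPolicy.RUNTIME)
    @interface ReadOnly {
    }

    // The style of scan performed by AnnotationParsing.java: list every
    // method of a class that carries the @ReadOnly annotation.
    class AnnotationParsing {
        static void listReadOnlyMethods(Class<?> target) {
            for (Method m : target.getDeclaredMethods()) {
                if (m.isAnnotationPresent(ReadOnly.class)) {
                    System.out.println(m.getName() + " is read-only");
                }
            }
        }
    }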
The diagram shown in figure 20 is the high-level class diagram. It includes all the classes, including the test classes, and arranges them according to the calls made from the test class. The diagram first shows the test class (WsApiCoverageTest), which directly calls ServerBuilder.java, ServerUtils.java, LAB.java, TestUtils.java and Server.java; these classes in turn call other classes, and so on down the same hierarchy of calls. It shows three test classes and all the other classes called from them. The multiplicity is shown as 0..1, and m_ovs indicates the object through which a class is called. A more detailed low-level class diagram is shown in appendix E.
11 Read-only methods are those methods which do not have any effect even if we use them. For example, some of the attributes of a class cannot be changed by a setter method, and hence such a setter acts as read-only. To identify these methods we need an annotation such as @ReadOnly.
44
Figure 20: High-level class diagram; calls start from the test class for the server operations
45
6. Test Experiments and Results
This chapter develops a model of API testing, using the model of GUI testing as a reference, and then derives metrics from this model of API testing. Results are shown of the experiments executed using the static and dynamic test coverage approaches based on the derived metrics. This chapter is divided into three subsections as follows:
1. The first section develops a model of API testing and derives metrics from this model.
2. The second section describes the experiments executed on the basis of those metrics using both test coverage approaches.
3. The third section describes the experiments executed on the basis of some extra metrics measured by static test coverage analysis.
6.1 Model of API Testing
This section develops a model of API testing [13]. It uses the model of GUI testing [14] as a reference: it first discusses the model of GUI testing and then the model of API testing.
6.1.1 Model of GUI Testing
In GUI testing (as opposed to system testing over the GUI interface), tests are evaluated to make sure the following criteria are satisfied [28]:
1. Make sure that the GUI is correctly structured, e.g.
a. The right elements on the right pages
b. The right navigation between pages
2. Make sure that graphics elements are correctly presented.
3. Make sure that user inputs are correctly interpreted into internal (normally binary) format: e.g. "10" to the value 10 (00001010 binary).
4. Make sure that user inputs result in the correct actions, e.g.
a. Calling the correct methods to validate input data.
b. Calling the correct methods to execute the "action" of buttons etc.
c. Calling the correct "action" methods for pull-down and popup menus.
d. Calling the correct "action" methods for keyboard shortcuts.
e. Calling the correct "action" methods for mouse actions.
5. Make sure that outputs from the program are correctly displayed via the GUI to the user (that is, correctly converted from internal format to GUI format such as text, or graphics elements such as dials and indicators) and shown in the correct GUI element (not mixed up).
46
6.1.2 Model of API Testing
Similarly, based on the model of GUI testing, the model of API testing can be developed to evaluate the following coverage criteria:
1. Ensure that the API is visible with the correct protections, e.g.
a. private int integer;
b. protected String str;
c. public void print(); etc.
2. Ensure the correct conversion from 'external' to 'internal' format, e.g.
a. int stringLength(String str),
b. String generateRandomString(int), etc.
3. Ensure the correct transfer of input parameters to the actual internal methods called.
4. Ensure that the correct internal classes/methods are called from/via the API (particularly important in the case of extending a class or implementing an interface).
5. Ensure that the outputs are correctly passed back over the API to the calling program (the correct callbacks are executed).
6. Ensure that we get the correct response in the form of data, data type, order of data, and completeness of information.
Based on the model of API testing developed above, the following metrics were derived to evaluate the experiments using both static test coverage analysis and dynamic test coverage execution:
1. Method/class/package test coverage. (General test coverage)
2. Interface and abstract member test coverage. (Criteria 1, 4)
3. Callback test coverage. (Criterion 5)
4. Data type conversion test coverage. (Criterion 2)
5. Inherited class/method test coverage. (Criteria 3, 4)
6. Direct and indirect class/method test coverage. (Criterion 6)
The following section of this chapter discusses the test coverage for all of these metrics using both the static and dynamic test coverage approaches.
6.2 Experiments on Metrics Derived From the Model of API Testing
This section discusses the experiments performed and the results obtained using both static and dynamic test coverage on the basis of all the metrics derived from the model of API testing. Every experiment is divided into two parts: the first describes the experimental setup and the second the results obtained. Table 6.1 gives brief information about each experiment, its purpose, and the section describing its setup and results. All the results obtained are explained and evaluated in the next chapter.
47
Table 6.1: Brief description of all the experiments
Experiment No. | Purpose of Experiment                                                                                | Section
1              | Determine the test coverage of all packages, classes and methods using the same test cases.          | 6.2.1
2              | Evaluate the test coverage of an interface and its abstract members.                                 | 6.2.2
3              | Evaluate the test coverage and execution of callbacks.                                               | 6.2.3
4              | Test coverage of methods which convert an object of one data type to another.                        | 6.2.4
5              | Test coverage of inherited classes and member functions.                                             | 6.2.5
6              | Test coverage of classes accessed directly and indirectly by the test class.                         | 6.2.6
7              | Test coverage of default constructors.                                                               | 6.3.1
8              | Test coverage of the attributes (data members) of a class.                                           | 6.3.2
9              | Removing members of a class which are not useful (read-only members, constructors, enumerations etc.)| 6.3.3
               | from the test coverage report, to obtain more accurate results.                                      |
Note: Experiments 7, 8 and 9 are not based on the metrics derived from the model of API testing; I identified these additional metrics while executing the other experiments. They are discussed in section 6.3, Experiments on Additional Metrics Covered by Static Test Coverage Analysis.
6.2.1 Method/Class/Package Test Coverage
This experiment was executed to observe the package, class and method coverage reported by both the static and dynamic test coverage approaches.
6.2.1.1 Experiment 1
The same test cases, in one test class, were used with both test coverage approaches. The test class written for this experiment is shown in figure 21.
Figure 21: Test class for testing core functionality and API
48
The class shown in figure 21 contains the tests related to the core functionality of the server class. These tests were analyzed by static test coverage and executed by dynamic test coverage to observe whether both give the same or different coverage for classes, packages and methods. This test class includes test cases that test methods only.
6.2.1.2 Result 1
The result shown in figure 22 contains the results of the dynamic (left side) and static (right side) test coverage. From the figure it can be seen that the wsapi package has an overall test coverage of 93.33% by static analysis and 93.5% by dynamic execution. These two results are almost the same. There are only two differences, as follows:
1. The dynamic test coverage execution does not include interface test coverage (this is discussed in detail in the next experiment), while the static test coverage includes the interface and its abstract method in the result.
2. Static test coverage treats an enum as a separate class, while dynamic treats it as a member of its class. This leads to different coverage figures for the class and the enumeration, but it makes little difference to the overall test coverage percentage.
Figure 22: Class/Method test coverage by the dynamic (left) and static (right) test coverage approaches
6.2.2 Interface and Their Abstract Members Test Coverage
This experiment shows how the test coverage of Java interfaces and their abstract members is generated. A class implements an interface, thereby inheriting the abstract methods of the interface. An interface is not a class; they differ in the following ways:
 A class describes the attributes and behaviors of an object.
 An interface contains behaviors that a class implements.
49
An interface is a collection of abstract methods: the abstract methods are declared in the interface and defined in the classes implementing it.
6.2.2.1 Experiment 2
An interface was written in the "wsapi" package. It has an abstract method, "methodToCallBack()". This interface is implemented by the ManagerConfig.java class in the config package. In the test class, m_wsapi = m_ovmm.getWsApiConnection() is called to create the connection between the client and OVMM. getWsApiConnection() (a method of the class implementing the interface) calls the abstract method.
6.2.2.2 Result 2
Figure 23: Interface and abstract method coverage by the static (right) and dynamic (left) approach
As shown in figure 23, the test coverage result from the static approach is on the right side. The lines in blue are the interface and its abstract method; it can be observed that they are marked as covered under the "wsapi" package. On the left side is the test coverage report generated by the dynamic tool, Eclemma [25, 26]. In this report the interface is not mentioned in the "wsapi" package, and in the class which implements the interface, ManagerConfig.java, the abstract method "methodToCallBack()" appears as a member function of that class in the test coverage report. This experiment shows that:
1. Static test coverage analysis reports the coverage of an interface, with its abstract members, in the package where it was defined.
2. The dynamic tool does not show interfaces; it reports the test coverage of the abstract methods of an interface as member functions of the class which implements the interface.
50
6.2.3 Callbacks Test Coverage
A callback is a mechanism in which a library or utility class provides a service to another class that is unknown to it at the time it is defined. In terms of computer programming, a callback is a piece of executable code that is passed as an argument to other code, which is expected to call back (execute) the argument. The UML diagram of a working callback is shown in figure 24: there is an interface, Login.java, with an abstract method, methodToCallBack(), declared inside it. A Java class which implements the interface has a method which calls another method of the same class (or of another class), and this called method then calls the abstract method of the interface. This process is referred to as a callback.
Figure 24: UML diagram to explain the working of a callback
6.2.3.1 Experiment 3
The implementation of the callback is shown in figure 25. There is an interface, "Login", with an abstract method, "methodToCallBack()". ManagerConfig.java implements this interface. This class has a method which calls another method, "Callback()", of the same class; inside Callback() the abstract method of the interface is called.
Figure 25: Callback Implementation and its call using other class in java
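The wiring of figure 25 can be reconstructed roughly as follows. The method bodies are illustrative, and in the project getWsApiConnection() also returns the WSAPI connection object, which is omitted here for brevity:

    // Login.java: the interface declaring the abstract callback method.
    interface Login {
        void methodToCallBack();
    }

    // ManagerConfig.java: implements the interface. getWsApiConnection()
    // calls Callback(), a method of the same class, which in turn calls
    // the abstract method of the interface.
    class ManagerConfig implements Login {

        public void methodToCallBack() {
            System.out.println("callback executed");   // illustrative body
        }

        public void getWsApiConnection() {
            Callback();                // a method of the same class...
        }

        private void Callback() {
            methodToCallBack();        // ...which calls back the abstract method
        }
    }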
51
6.2.3.2 Result 3
Figure 26: Callback test coverage by the static (right) and dynamic (left) approach
Figure 26 shows the test coverage result generated by static test coverage analysis on the right side and dynamic test coverage execution on the left side. Static test coverage shows the interface and its abstract method in the package where it was declared. Dynamic test coverage execution does not show the interface in the "wsapi" package; instead, the abstract method "methodToCallBack()" appears as a member function of the class which implements the interface. The rest of the test coverage is the same for both methods; the only difference is in the coverage of the interface and its abstract methods. This result is the same as that shown in experiment 6.2.2, Interface and Their Abstract Members Test Coverage. From these two experiments it is observed that static test coverage analysis works on method declarations, while dynamic test coverage execution works on method definitions.
6.2.4 Data Type Conversion Test Coverage
Data type conversion here means a method that accepts arguments of one data type and returns objects of another data type. Since such methods can introduce bugs, it is important to test them and observe their test coverage. Some of the utility code written for this project converts objects of the configuration types into objects of the API by using the core functionality, for example the two functions shown in figure 27.
52
Figure 27: Example code for conversion of data type of objects
6.2.4.1 Experiment 4
The methods shown in figure 27 are two different methods for discovering servers. Both methods have the same name and return objects of the Server class; the difference is that they accept different arguments. There are other methods as well which offer the same functionality by calling methods of the core functionality. Test cases were written specifically to call all such methods. There are 4 classes which perform the data type conversion, as follows:
 ServerBuilder.java
 JobUtils.java
 ServerUtils.java
 RandomStringGenerator.java
The test class contained test cases calling all the methods of these classes, in order to observe the test coverage by both static test coverage analysis and dynamic test coverage execution.
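A plausible reconstruction of the two overloaded discovery methods of figure 27 is given below. Only the overloading pattern and the classes involved follow the descriptions in chapter 5; the parameter names, getter names and method bodies are illustrative:

    public class ServerBuilder {

        // Overload 1: accepts configuration objects and converts them
        // into the raw details expected by the second overload.
        public Server discoverServer(ManagerConfig manager, ServerConfig config) {
            return discoverServer(config.getName(), config.getHostname(),
                                  config.getVirtualIp(), manager.getId());
        }

        // Overload 2: accepts all the server details directly and returns
        // a model object of the API's Server class.
        public Server discoverServer(String name, String hostname,
                                     String virtualIp, String managerId) {
            Server server = new Server();
            server.setName(name);
            // ... here the core functionality (OvmWsClient) would be called
            // to perform the actual discovery job ...
            return server;
        }
    }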