Introducing MyContainer
In-Container Testing for gCube Services
I am going to motivate and present tools to test service code in two classic scenarios:
• manual testing: i.e. in the tight implement<->debug cycle of coding sessions
• automated testing: i.e. during local or remote build processes, whether pre-commit builds, nightly
builds, or the continuous integration builds we hope for in the future
In particular, I will illustrate the advantages that these tools offer over existing practices.
Context
My overall impression is that we currently pay lip service to testing. It seems to me that we do so:
• across the entire spectrum of grains, scopes, and forms which may be associated with the
concept
• in spite of a decade of pragmatic and analytic evidence that testing is in fact highly beneficial,
not just to functional correctness but also to design quality.
To start at the beginning, unit testing seems to be scarcely practiced in the project.
It seems so even if we stick with the classic view of testing (validate existing implementations) and
otherwise decide to ignore more "agile" perspectives (design and implement to pass an existing
test, TDD). 
Some unit tests do exist buried in our components, but I believe these are not executed in our
nightly builds, which is the only form of automation we have adopted so far for code integration.
This alone sets us quite a long way off the practice of continuous integration, which takes testing
to be an integral part of any build process.
As far as development practices go, one may say that we are at least 10 years behind the state of
the art.
Some Reasons
The blame is partly on our technology stack, including gCore, which does not promote component
isolation and thus inhibits standard testing techniques (e.g. mocking). 
We can overcome these problems through careful design and isolate the stack as a piece of legacy
technology.  However, doing so is not always easy even when it is possible. Most importantly, I
believe it has not been done so far.
This tells us that, as a project, we have yet to acknowledge the importance of systematic,
reproducible, and automated testing. I am living proof that pursuing testing in design, practices,
and tools requires "education"; I say so not to emphasise that I am educated but to admit that I was
utterly ignorant until recently.
In-Container Testing
There are gCube components in which unit testing is much easier to practice, such as libraries and
plugins. I believe that we should pursue it aggressively in these cases, as we have been doing
for Content Management for a year or so.
Yet the bulk of our code runs inside a container and has remote clients. While enabling unit testing of
service code would give us precious feedback, we have an even stronger need for integration
testing. 
The notion of integration testing covers a wide spectrum beyond unit testing, from integration of
service components to integration of gCube services over a wide-area network. 
Our need is particularly strong just one step beyond the scope of unit testing. Given our choice of
technologies, this extra step brings us to what people often refer to as in-container testing, where
tests exercise the functional and non-functional features of a target service which is deployed
within a target container. 
In our case, in-container testing is the first chance we get to test the integration between the
individual components of a service (if we used other technologies, such as Spring, there would
be earlier opportunities, but we do not use them and those opportunities do not materialise) . This
is also where we test integration between service components and client components and, most
crucially, between service components and container components,  particularly gCore
components. Are interfaces supportive of functional requirements? Do client requests serialise and
deserialise correctly? Does the service manage its state correctly? Does it produce expected
outputs? Does it publish and retrieve information from the Information System as expected? Does
it work correctly in multiple scopes? How does it behave under concurrent load and with large
payloads? etc... 
System Testing
The questions above are among the first ones, both functional and non-functional, that we try to
answer in the process of developing new services or modifying existing services. Sometimes these
are also the only questions.
Often, however, our services have runtime dependencies that:
• cannot be satisfied within a single container (external dependencies)
• are not the ubiquitous ones towards the Information System (publication and discovery).
In this case, integration testing may take a broader scope and a coarser grain,  requiring the
staging of multiple services in multiple containers. This is system testing and we approach it in
cooperatively managed development infrastructures, such as devNext. 
Unsurprisingly, it has been noted that this solution has proven inadequate in ensuring a stable and
reliable testing environment. It has been noted that a development infrastructure, at times more
than others, approximates wilderness. It has also been noted that improving over this solution is of
key importance for our future. 
Yet I believe that improving in-container testing is even more crucial, for two reasons. 
• if we can discover bugs or design inefficiencies in the first testing environment that allows us to
observe them (sure, earlier for some services than for others), then fewer bugs will need to be
found in the wilderness; releases will be faster and less painfully staged. Overall, fewer bugs will
risk making it to production.
• most importantly, I believe improving in-container testing is a necessary step towards better
solutions for system testing.
I will not speculate today on what these solutions may be, as it would be premature. However,
some ideas - not entirely new in fact - are starting to emerge precisely as generalisations of early
experience with in-container testing.
Status Quo
So, what is there to improve upon when it comes to current practices for in-container testing? What do
we do today to test services that run inside containers? 
Practice and mileage may vary, but I think it is safe to assume that most of us rely on so-called test
clients. 
We first build and deploy the service code in some target container, then we launch the clients and
observe the outcomes.  We then correct/evolve the clients in response to failures/changing
requirements, and relaunch the tests (one hopes!). 
That is pretty much it.
Looking at the problems
Though quickly described, this approach to testing is complex and inherently manual.
In particular, it results in tests that: 
• are not repeatable
• are hard to share within a team
• execute more slowly and less frequently than they should
• are never executed within build processes.
It seems to me that the root problem here is that tests and container have different lifetimes and
are managed in different environments.
They have different lifetimes in that, to retain some sanity, we tend to use containers that are
dedicated neither to the test/test-suite nor to the service targeted by those tests. This means that
the state of the container may change across runs of the same test; the environment in which it
runs may change and so may its configuration.  With our containers in particular, libraries
may come and go freely at the rhythm of deployments and un-deployments. As a result, a test that
works today may fail tomorrow on the same machine without any intervening change to the service
code or the test.  
Within teams, these reproducibility problems can be observed even more across different
machines, i.e. across space as well as over time.  And sharing an installation of the container
creates its own problems, to do with distributed management and poorer and slower working
environments.
Sharing the tests is complicated in itself. These depend on the physical location of the container,
its endpoint, and the various environment variables or property files that one normally uses to push
these contextual dependencies outside test code. Often undocumented, these contextual
dependencies result in test clients which are understood and thus executed only by their author.
This means that they are executed too late with respect to code changes that have been applied,
and are better understood, by other team members.
Reproducibility and sharing issues aside, the separation between tests and container makes for
containers that contain more deployments than the tests actually need (local services, globus
notification services, etc).  Startup and execution times become longer and coding sessions
slower. 
A lot of our time goes also in managing the container's lifetime, i.e. starting and stopping the
container before and after the test. In most cases, we do this from the console, in an environment
other than the IDE in which we author test and production code. If we do manage the container's
lifetime from within the IDE, we end up creating the same kind of synchronisation problems for the
team which we have already discussed for the test clients.
What is probably most time-consuming for us is having to go through build-and-deploy cycles at
each and every change in the code. Testing a one-line change in service code tends to take tens of
seconds rather than milliseconds.
All these inefficiencies  push developers to seek testing feedback less incrementally than they
should. The later the feedback the harder it is to pin down problems and sort them out.
Notice that all the problems above become worse if containers are fully CONNECTED to a
development infrastructure, regardless of actual test requirements. This effectively means that we
do system testing even when we could do in-container testing, i.e. in a scope which is considerably
more complex to control. I suspect not many containers join infrastructures in STANDALONE mode
(this is a “stealth mode”: the Information System can be queried but the service leaves no visible
traces in the infrastructure; the container can come up and go down without causing disruption,
and many runtime activities of the container are avoided, which makes for quicker startup and test
execution).
Finally, how are we to automate this approach to integration testing?  I do not know how to answer
that question, but I suspect that it is very difficult if not impossible. This means that we cannot test
the code as part of our local or remote builds, which decreases our chances to catch regression
errors. Our confidence in changes then diminishes and design enters a state of paralysis.
Having optimised our development practices over the years may make these problems occur
less often than they otherwise would. This does not mean that they do not occur, or that they do not
occur when there is less time to handle them, typically close to a release. Above all, it does not mean
that our time would not be better spent on more creative implementation and design activities!
Requirements
It seems to me that improving over the status quo calls for tighter integration between the tests and
the container in which we deploy the service under testing.
To address problems of test reproducibility and test performance, we need a container which is
entirely dedicated to the service and its tests. Only then will we have a guarantee that, at each test
run, the container is configured with no fewer and no more deployments than are required for that
run.
To make this viable and to address problems of development efficiency, test share-ability, and test
automation, we need container and tests to share the same execution environment. In other
words, we need a container that can be embedded in the tests, i.e. can be configured, started,
stopped, and used from within the tests. 
If container and tests run in the same JVM, they will “see” the same classpath resources,
including service code. This means that changes applied in a coding session from the IDE will be
immediately “live” within the container, i.e. we will not need to explicitly build and deploy them
before test execution (manually or not, from within the same or other development environment,
we just won't).
We will still have requirements for explicit deployment and undeployment but these will be limited
to resources which should not or cannot be on the classpath, such as:
• WSDL interfaces, scripts, and various forms of configuration files that may have changed since
the last test execution
• libraries and GARs of other services that the test requires to be co-deployed with the target
service.
Like the container, we need to embed these light-weight deployments and un-deployments in the
tests, as pre-conditions and post-conditions to test execution.
As an important side-effect of the single JVM assumption, the tests will be able to obtain
references to service and gCore components as these run in the container, including port-type
implementations, service contexts, resource homes, the GHNContext, resource serialisations,
etc. This means that we will be able to exercise not only client-driven tests but also service-
side tests. We will be able to make assertions on the state of those components and on the state of
the container within the tests.
This potential will lead to service designs that are more testable than they are now. We will be
encouraged to design our service components so that mock dependencies can be injected into them
during the execution of the tests. In other words, we will be able to lift well-known unit-testing
techniques into the context of integration testing, circumventing the obstacles to unit testing that I
have discussed above.
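To make the idea concrete, here is a minimal sketch; the component, its collaborator, and the setter-based injection point are entirely hypothetical and only illustrate how a test could swap a real dependency for a hand-rolled fake once service components are reachable from test code:

// Hypothetical service-side collaborator and component; not part of gCube.
interface CatalogueReader {
  String lookup(String id);
}

class SampleServiceLogic {
  private CatalogueReader reader;
  // injection point: production code wires the real reader, tests wire a fake
  void setReader(CatalogueReader reader) { this.reader = reader; }
  String describe(String id) { return "item: " + reader.lookup(id); }
}

// In a test, the real collaborator is replaced with a fake before exercising the logic.
public class SampleServiceLogicTest {
  @org.junit.Test
  public void describeUsesInjectedReader() {
    SampleServiceLogic logic = new SampleServiceLogic(); // or obtained from the running container
    logic.setReader(new CatalogueReader() {              // fake dependency
      public String lookup(String id) { return "stubbed-" + id; }
    });
    org.junit.Assert.assertEquals("item: stubbed-42", logic.describe("42"));
  }
}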
My Container
Over the past couple of months we have worked towards meeting the requirements above against
our current technologies.
The idea was to produce tools that supported an embedding of Globus and simplified its use for
in-container testing. Since we wanted to ultimately deliver a dedicated and friendly container, we
initially code-named the project Pasqualino. Eventually, we settled on a slightly more neutral
name, my-container.
The first difficulty for us was that, true to its age, the Globus container wasn't born to be easily
embedded. One way in which Globus customises Axis is by hard-coding the choice of its file-based
initialisation mode: the container comes up and seeks evidence of service deployments in
distinguished folders on the file system. This means that an embedded Globus container nonetheless
needs a physical installation, i.e. it cannot exist purely in memory.
The best that we could do was to target a minimal installation of the container: my-container
is distributed as a tarball of about 50KB and expands into 0.5MB of disk space. It does not contain a
single script, pre-deployed service, or in fact library. It contains the necessary support for
embedded deployment (see later), and configuration to start up on localhost:9999, in
DISCONNECTED mode for a number of known infrastructures, devNext by default. Differently from
standard container distributions, it also embeds the storage of service state, for convenience of
post-test inspection during coding sessions.
With this distribution, my-container discourages deployments and startups which are not
defined in code, and it requires explicit control of the classpath.
Build Support
The distribution of my-container is built in Etics, where it is available for manual download.
Underneath however, it uses Maven as the build system and is published at least every night in our
Nexus repository at http://maven.research-infrastructures.eu/nexus
We can manually download my-container and install it as a test resource of our components,
excluding it from version control. Even better, we can automate the download and installation of
my-container during the build of our components. As I will show later, we can achieve this
automation for our standard, Ant-based components. We can equally achieve it for the new breed
of Maven-based components that we are slowly integrating within gCube in a parallel line of
work.
Runtime Library
The distribution of my-container satisfies Globus requirements for a physical installation. Next,
we needed to offer support to control it from within the tests. This support is provided by a
dedicated library, which we refer to as the runtime library of my-container. Like the distribution,
the library is built every night in Etics and, as a Maven-based component, is available in our
Nexus repository.
We can download the library and embed it in our components as a test library. We can submit it to
version control, or keep it outside our projects and explicitly depend on it for Etics builds. While
no decision has been made yet, it is possible that future versions of gCore may embed it, as much
as it now embeds the Ant and JUnit libraries.
The runtime library supports two modes of interaction with my-container:
• a low-level mode whereby we interact directly with my-container through an API
• a high-level mode whereby we use annotations and JUnit4 extensions to delegate interactions
with my-container
The high-level mode is recommended, as it makes test code simpler to write and read. The low-
level mode can be used for use cases which are not covered by the high-level mode. The choice
does not need to be exclusive, as the two modes can be combined within a single test or test suite.
We start from the beginning, looking at the low-level mode first.
The low-level API
The basic facility that we find in the runtime library is MyContainer, an abstraction over the local
installation of my-container which we use to interact with the container from within our tests.
The standard usage pattern is as follows:
• create an instance of MyContainer
• invoke the method start() on it
• write the test code proper, interacting with the instance if required by the test
• invoke the method stop() on it
In the first step we identify the local installation of my-container and get a chance to configure
the container for the test, including the service or services that we wish to deploy in it for testing
purposes.
In the second step we block until the container reaches:
• the state CERTIFIED
• the state DOWN or FAILED
• none of the states above within a configurable timeout
In the first case start() returns successfully and the test can progress further, in the latter two
start() raises an exception that fails the test.
In the third step, we write test code that will run “in” the container, i.e. in the same runtime that we
expect for service code. We can access the GHNContext to inspect its state, deploy some plugins,
register some listeners, etc. We can also access the port-types, contexts, homes, etc. of the
services that we have deployed in the container. We can then get to the usual testing business, i.e.
make assertions about the state of these components and verify the occurrence of expected
interactions.
With the final step, we stop the container and perform some required cleanup.
Consider the simplest of examples:
MyContainer container = new MyContainer();
container.start();
container.stop();
Since we instantiate MyContainer without any configuration, the container will start up with
defaults:
• the installation will be expected in a directory my-container under the working directory
• the container will start on port 9999
• no service will be deployed in it
• the startup timeout will be 10 seconds
Since we also specify no testing code proper, the container will be stopped as soon as it reaches
an operational state (just READY in this case, as there are no deployments that require
certification).
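Even in this minimal form, it is good practice to guard the test body so that the container is stopped also when an assertion fails mid-test; a small sketch of the same pattern (the test body is elided):

MyContainer container = new MyContainer();
container.start();
try {
  // ... test code proper: assertions on service and container state ...
} finally {
  container.stop(); // release the container even if the test body throws
}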
Deployments
While the three lines above may serve as a “smoke test” for gCore itself, they are of little use for
service testing. To test a service, we need to be able to deploy it into my-container before
startup. And to do so from within test code, we need to model the deployment unit in Globus, the
Grid Archive.
The runtime library includes the Gar class for this purpose. We can create a Gar instance and
point it to all the project resources that we wish to deploy in my-container, from Wsdls and
supporting XML Schemas, to configuration files (JNDI, WSDD, profile.xml,
registration.xml,...), to libraries. Assuming our standard project layout, for example, we can
assemble a Gar of the service under development as follows:
Gar myGar = new Gar("my-service").addInterfaces("schema").addConfigurations("etc");
Here we use the builder API of the Gar class to mimic in code what we normally do with dedicated
Ant targets during the build of our service. Differently from those targets, however, the
programmatic construction of the archive is independent of any project layout or underlying build
system. We add project resources by providing their paths relative to the project root (e.g. schema
and etc). With a project layout that aligns with Maven conventions, for example, we may write
instead:
Gar myGar = new Gar("my-service").addInterfaces("src/main/wsdl").
addConfigurations("src/main/resources/META-INF");
Before we get to actually deploying the Gar, there are few things to notice:
• we provide a name for the Gar under which its resources will be deployed (e.g. my-service).
As usual, this name must be consistent with relative paths to deployed Wsdls that we specify in
the deployment descriptor of the service. Our standard Ant buildfiles use package names for
this purpose and our deployments reflect the convention. We would then create Gar instances
accordingly, e.g. new Gar("org.acme.sample")...
• we provided relative paths to whole directories of resources. This is a convenient way to add
resources en-masse to the Gar, which matches well our current project layouts. If need arises,
however, we can point to individual resources using methods such as addInterface() and
addConfiguration(), which expect relative paths to individual files (e.g.
addInterface(“config/profile.xml”)). This supports non-conventional project layouts.
Equally, it allows us to “override” some of the standard resources, for exploratory programming
(e.g. to test the service with a non-standard profile). In these use-cases, we can first add
directories of standard resources and then add dedicated test resources that override some of
the standard ones (see the sketch after this list).
• we have not added libraries to the Gar. This is because the service code is expected to be
already on the classpath, including generated stub classes (as usual, these are placed on the
classpath after previous building steps). As we discussed above, this code is immediately “live” in
my-container. In some cases, however, the tests may require the deployment of libraries that
are not on the classpath (e.g. service plugin implementations). In these use-cases, we can use
the methods addLibrary() and addLibraries() to add these runtime libraries to the Gar.
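As a sketch of the "override" use-case mentioned in the list above, the following assembles a Gar that first adds the standard configuration directory and then a test-specific profile; the path test/profile.xml is hypothetical, and we assume here that addConfiguration() chains like the other builder methods:

Gar myGar = new Gar("my-service")
              .addInterfaces("schema")
              .addConfigurations("etc")                // standard resources first
              .addConfiguration("test/profile.xml");   // hypothetical test resource overriding the standard profile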
Once we have assembled the Gar for deployment, we can pass it to the constructor of
MyContainer. When we invoke start(), the Gar is deployed in my-container before the
container is actually started. For example:
Gar myGar = new Gar("my-service").addInterfaces("schema").addConfigurations("etc");
MyContainer container = new MyContainer(myGar);
container.start(); //gar deployed at this point
Deployment goes through the steps we normally observe during a build. Underneath, in fact,
MyContainer invokes programmatically the same Ant buildfiles which are normally found in full-
blown container installations and which are retained in my-container (this ensures consistency
with external build processes). Wsdls will be extended with Globus providers and binding details,
deployment descriptors and JNDI files will be renamed and post-processed for expression filtering
(e.g. @config.dir), resources will be placed in the container in the usual places (e.g. share/
schema/my-service, lib, etc/my-service), undeployment scripts will be generated
(undeploy.xml), ... Accordingly, we get a first form of feedback about the service under
development, even before we’ve exercised any piece of service functionality. If the container starts
up then our service has deployed correctly, otherwise we have made some mistake in the
configuration of the service which we rectify straight away.
Notice that we can deploy many Gars at once and there is no minimum requirement for what we
put in each individual Gar. This may be useful when we need to deploy auxiliary libraries, as we
can assemble the standard Gar for the service, a separate Gar for the auxiliary libraries, and then
deploy the two Gars together, e.g:
Gar myGar = new Gar("my-service").addInterfaces("schema").addConfigurations("etc");
Gar auxGar = new Gar("my-plugin").addLibrary("test/my-plugin.jar");
container = new MyContainer(myGar,auxGar);
container.start();
We can also construct a Gar instance from an existing archive:
Gar existingGar = new Gar("test/somegar.gar");
This is useful if we wish to place our tests outside the service under testing, in a separate module
that assumes that the archive of the service has been previously built and is available as a test
resource. It is also useful if the test requires the service to be deployed in my-container along
with other services, e.g.:
Gar myGar = new Gar("my-service").addInterfaces("schema").addConfigurations("etc");
Gar codeployedGar = new Gar("test/other-service.gar");
container = new MyContainer(myGar, codeployedGar);
container.start();
Port-types and Endpoint References
After deployment and container startup, the test proper can begin. For example, a smoke test that
should be part of the test-suites of all our services is the following:
Gar myGar = ...
container = new MyContainer(myGar);
container.start();
assert(ServiceContext.getContext().getStatus()==GCUBEServiceContext.Status.READIED);
container.stop();
This test inspects the state of the service to ensure that it has come up correctly in my-
container. Next, we will want to test the methods of the service API. We can do this in either one
of two ways:
• by invoking directly the methods of some port-type implementation (internal tests)
• using the service stubs, as a client would normally do (external tests)
For internal testing, we need access to the implementation of the port-type. As this is instantiated
and controlled by Globus, we cannot access it from other service components but need to ask
MyContainer for it, e.g.:
Stateless pt = container.portType("acme/sample/stateless", Stateless.class);
...pt.about(...)...
Here we pass the name of the port-type to the container (“acme/sample/stateless”, as it
would be specified in the deployment descriptor of the service), along with the class of the instance
that we expect back (Stateless). We then directly invoke a method on the port-type that we wish
to test (about()).
For external testing, we need an endpoint reference to the port-type. Again, we ask MyContainer
for one:
EndpointReference epr = container.endpoint("acme/sample/stateless");
StatelessPortType pt = new StatelessServiceAddressingLocator().getStatelessPortTypePort(epr);
...pt.about(...)...
Clearly, external tests give more feedback than internal ones, at the cost of slightly slower
execution times (but remember, we are using localhost!). They flag any problems we may have
with input and output serialisations. If we experience any such problem during development, we
may want to temporarily enable org.apache.axis.utils.tcpmon on a port other than the
container’s, so as to inspect the serialisations directly. In this case, we need an endpoint reference
configured for the monitored port, e.g. 9000:
EndpointReference epr = container.endpoint("acme/sample/stateless", 9000);
Once we have sorted the problem out, we can revert the code to use endpoint references that
point to the container’s port, as we will not have or want the TCP monitor running when the
tests are executed non-interactively during build processes.
Based on these basic facilities, the precise actions that we take in our tests depend on how we
designed the service and on our ingenuity. The possibilities are actually endless. If we need to, we
can obtain from MyContainer access to key locations in the container. For example:
• configLocation() gives us access to the configuration directory of my-container. This
allows us to override key configuration files before we start the container (e.g. add a
ServiceMap, deploy a custom GHNConfig.xml file, enable security, etc..)
• storageLocation() gives us access to the storage directory of my-container, where we
can find the serialisations of any stateful resources that we may have created during the test
(e.g. to confirm the creation of such serialisations)
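For illustration only, a sketch of how these locations might be used around a test; the assumption that the location methods return java.io.File, as well as the file and directory contents checked below, are mine and not documented API details:

// Before start(): the configuration directory is where we could drop, say, a custom GHNConfig.xml.
java.io.File config = container.configLocation();

// After exercising the service in a test: confirm that stateful resources were serialised to storage.
java.io.File storage = container.storageLocation();
org.junit.Assert.assertTrue("expected at least one serialised resource",
                            storage.listFiles().length > 0);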
Other key locations are also available through MyContainer (location(), libLocation(),
deploymentsLocation()), though these are used primarily by MyContainer itself and are
unlikely to be targeted by our tests.
Overall, we can access in principle any gCore component and service component that may enable
us to exercise the intended behaviour of the service under testing.
Logging
One immediate advantage of running tests and container in the same JVM is that the logs emitted
by either are merged in a single log. This gives us a full picture of the execution, a picture that can
be delivered to the console of our IDE.
Building on this potential, the distribution of my-container includes a log4j.properties
configuration file, which is loaded up dynamically as soon as we instantiate MyContainer. In it,
the loggers used by Globus, Axis, and gCore are configured to append to the console (only
warnings in the first two cases). So, we do not need to take any action to find the container’s logs
in, say, our Eclipse console.
As a further convenience, log4j.properties in my-container also includes configuration for
loggers called test. This gives us a configuration-free way to log from within our test code. For
example, using such a logger in test code as shown below:
private static Logger logger = Logger.getLogger("test");
...
@Test
public void someTest() throws Exception {
...
logger.info("in test!");
...
}
would result in logs like the following:
[TEST] 14:17:50,086 INFO test [main,main:549] in test
Of course, this leaves out all the logs of the service under testing. To include them, we need to
place our own log4j.properties on the test classpath and follow standard Log4j configuration
patterns. For example, if the service uses loggers called org.acme...., then the
configuration could look like the following:
log4j.appender.ROOT=org.apache.log4j.ConsoleAppender
log4j.appender.ROOT.layout=org.apache.log4j.PatternLayout
log4j.appender.ROOT.layout.ConversionPattern=%d{HH:mm:ss,SSS} %-5p %c{2} [%t,%M:%L] %m%n
log4j.rootLogger=WARN,ROOT
log4j.appender.ACME=org.apache.log4j.ConsoleAppender
log4j.appender.ACME.layout=org.apache.log4j.PatternLayout
log4j.appender.ACME.layout.ConversionPattern=[ACME] %d{HH:mm:ss,SSS} %-5p %c{2} [%t,%M:%L] %m%n
log4j.category.org.acme=TRACE,ACME
log4j.additivity.org.acme=false
The end result is that the console will merge logs from my-container, logs from the tests, and
logs from the service under testing, while still showing the provenance clearly. My personal
experience is that this merging proves extremely useful during debugging.
Test Isolation and Execution Performance
An important role of MyContainer is to promote the isolation of our tests. To this end,
MyContainer takes a number of actions on container startup, all of which are geared to wipe out
any form of state that my-container may have accumulated in previous tests. This discourages
us from basing some tests on the outcome of other tests, even when we can exert control over the
test order. In particular, MyContainer will:
• restore the default configuration of the container;
• clean the storage directory of any stateful resource serialisation;
• undeploy any Gar which is not required by the current test;
Notice that these actions are taken before the tests start, rather than once they have completed.
There are at least two important justifications for this timing choice.
Firstly, during coding sessions, it allows us to inspect the state of the container as left at the end of the
tests. In particular, we can confirm our expectations as to the deployed resources and the stateful
resources that may have been created. Since my-container is installed within the service
project, we can easily do so from within our own IDE.
Secondly, MyContainer can optimise container start-up by avoiding unnecessary deployments. If
the resources in a Gar required by the test have not changed since their last deployment, re-
deploying the Gar is happily avoided, as shown in the logs:
[CONTAINER] ... INFO mycontainer.MyContainer ... skipping deployment of sample-service because it is unchanged
The optimisation is significant, as deployments are easily the most time-consuming operations
during the execution of a test, especially when services have multiple port-types and a large
number of operations. Without deployments, my-container will start in less than 3 seconds, true to the
promise that an embedded container will make for very efficient interactive testing during coding
sessions.
To detect change, Gar instances keep track of the time of last modification of their resources.
Whenever we add a resource or a directory of resources to the Gar, the resource which has most
recently changed provides the time of last modification of the whole Gar. MyContainer then
compares this time with the time at which a Gar with the same name was last deployed, which is
the time of last modification of the undeploy.xml file for that Gar.
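A conceptual sketch of this freshness check (not the actual implementation; resources and deployDir are placeholders for the Gar's resource files and the container's deployment directory):

// Gar freshness: newest last-modification time among the Gar's resources...
long garLastModified = 0L;
for (java.io.File resource : resources) {
  garLastModified = Math.max(garLastModified, resource.lastModified());
}
// ...compared with the last-modification time of the Gar's undeploy.xml in the container.
long lastDeployed = new java.io.File(deployDir, "undeploy.xml").lastModified();
boolean redeployNeeded = garLastModified > lastDeployed;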
Given MyContainer’s help in terms of test isolation and performance, the actual degree of test
isolation is our responsibility. For maximum isolation, we could use a different instance of
MyContainer in each test. This, however, comes with its own drawbacks. First, there is a performance
issue. While my-container starts quickly, especially when deployments are optimised away, we
are nonetheless talking seconds rather than milliseconds. Second, Globus and gCore make
heavy use of static variables, and this may reintroduce issues of test isolation, which we wanted to
reduce in the first place.
I believe we can obtain a good compromise between test isolation and test performance by sharing
a single instance of MyContainer across a suite of strictly related tests (e.g. create tests, read
tests, write tests, and so on). All the tests we place in the suite above share the same instantiation
and configuration of the container. We pay the startup price once, and then execute each test in
the suite in milliseconds, i.e. in timings that we’ve come to associate with unit testing (and even if
we test service operations externally, through stubs).
JUnit Embedding
Where are we going to place our test code? We could put it in the main() method of a test client,
of course, but the recommended approach is to embed it in a more suitable testing framework,
such as JUnit. By doing so, we get a clear structure, proper integration with IDE and build tools,
and a host of testing facilities which are de facto standards.
One mapping of our testing pattern onto JUnit is the following:
public class MyTestSuite {

  static MyContainer container;

  @BeforeClass
  public static void startup() {
    Gar myGar = ...
    container = new MyContainer(myGar);
    container.start();
    ...
  }

  @Test
  public void someTest() throws Exception {...}

  @Test
  public void anotherTest() throws Exception {...}

  ...

  @AfterClass
  public static void shutdown() {
    container.stop();
    ...
  }
}
Here, the instance of MyContainer is shared across the tests of a suite, as per the approach
recommended above. The static methods annotated with JUnit's @BeforeClass and
@AfterClass are used to start and stop the container, respectively. Methods annotated
with JUnit's @Test are the individual tests of the suite.
Annotation-driven Tests
The JUnit skeleton above can be taken as boilerplate code for our test suites with my-
container. The runtime library builds on the extension facilities provided by JUnit to spare us
this boilerplate and, more generally, to avoid most of the interactions with MyContainer that we
have presented so far (creation, deployment, start/stop, obtaining port-type implementations and
endpoint references, ..). This is the high-level mode supported by the runtime library.
When we work in this mode, we simply annotate the test-suite as follows:
@RunWith(MyContainerTestRunner.class)
public class MyTestSuite {...}
MyContainerTestRunner is a JUnit 4 test runner which replaces the default one to:
• create, configure, and start an instance of MyContainer before any other code in the test suite
is executed by JUnit
• inject into the test-suite any port-type implementation or endpoint reference which we may need
• clearly name the output of any test with the name of the test itself
• stop the underlying instance of MyContainer after any other code in the test suite is executed
by JUnit
For example, our skeleton now takes this simpler form:
@RunWith(MyContainerTestRunner.class)
public class MyTestSuite {

  @Test
  public void someTest() throws Exception {...}

  @Test
  public void anotherTest() throws Exception {...}

  ...
}
This does not mean that we cannot have @BeforeClass and @AfterClass methods, only that we
do not need them just to start and stop a container.
Of course, we still need to be able to provide our Gar/s to the underlying MyContainer. However,
we can do so indirectly now, by exposing static fields appropriately typed and annotated. Our test
runner will recognise these fields and pass the information they provide on to the instance of
MyContainer that the runner handles on our behalf, e.g.:
@RunWith(MyContainerTestRunner.class)
public class MyTestSuite {

  @Deployment
  static Gar myGar = new Gar("my-service").addInterfaces("schema").addConfigurations("etc");

  @Test
  public void someTest() throws Exception {...}

  @Test
  public void anotherTest() throws Exception {...}

  ...
}
Here, we have used the @Deployment annotation to flag a static field of Gar type to the runner. The
runner will use it when it creates the instance of MyContainer. Since we can deploy as many
Gars in my-container as we need to, we can have multiple fields annotated with @Deployment
and of type Gar in our test-suite.
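For instance, mirroring the earlier plugin example, a suite could declare the service Gar and an auxiliary Gar side by side (paths as in the previous examples, with test/my-plugin.jar the same illustrative location used before):

@RunWith(MyContainerTestRunner.class)
public class MyTestSuite {

  @Deployment
  static Gar serviceGar = new Gar("my-service").addInterfaces("schema").addConfigurations("etc");

  @Deployment
  static Gar pluginGar = new Gar("my-plugin").addLibrary("test/my-plugin.jar");

  // ... @Test methods as usual ...
}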
Similarly, we may define static fields for port-types and endpoint references and have the runner
set their values for us, e.g:
@RunWith(MyContainerTestRunner.class)
public class MyTestSuite {

  @Deployment
  static Gar myGar = new Gar("my-service").addInterfaces("schema").addConfigurations("etc");

  @Named("acme/sample/stateless")
  static Stateless pt;

  @Named("acme/sample/stateless")
  static EndpointReference epr;

  @Test
  public void someTest() throws Exception {
    ....pt.about(...)...
  }

  @Test
  public void anotherTest() throws Exception {
    ... StatelessPortType ptStub =
          new StatelessServiceAddressingLocator().getStatelessPortTypePort(epr);
    ... ptStub.about()...
  }

  ...
}
Here we have reused @Named from the standard JSR-330 to require the injection of a given port-
type implementation and of an endpoint reference for it. The runner will pick up on these annotations
and set their values accordingly, well before the suite uses them in its test methods.
Through the same means, and if so required, the runner can also inject the underlying instance of
MyContainer in the test suite, e.g.:
@Inject
static MyContainer container;
where @Inject is also borrowed from JSR-330 to flag requests for (unqualified) value injections.
Having the instance of MyContainer within the test suite allows us to combine low-level and high-
level modes of interaction within the same test-suite. In particular, we can fall back to the API of
MyContainer when our tests need more staging flexibility and sophistication than annotations
can achieve.
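A minimal sketch of this fallback, in which the injected container is used to obtain an endpoint on the TCP-monitored port discussed earlier (the port-type name and port 9000 are the same illustrative values used above):

@RunWith(MyContainerTestRunner.class)
public class MixedModeTestSuite {

  @Deployment
  static Gar myGar = new Gar("my-service").addInterfaces("schema").addConfigurations("etc");

  @Inject
  static MyContainer container;   // injected by the runner

  @Test
  public void monitoredCall() throws Exception {
    // fall back to the low-level API where the annotations do not reach
    EndpointReference epr = container.endpoint("acme/sample/stateless", 9000);
    // ... build stubs from epr and exercise the port-type as in the external tests above ...
  }
}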
Non-Default Configuration
In all the examples above, we’ve relied on defaults for the location of my-container, the port on
which it listens for requests, and the startup timeout. However, we may wish to override some of
these defaults to have more control over the install location, or to make for shorter or longer startup
times or, less commonly, to target a different port (for proxying issues or other regulations).
To do this, we can use the other constructors of MyContainer:
• MyContainer(String,Gar ... gars) is dedicated to non-default locations, which is the
most common scenario for overriding defaults. Note that input paths are still resolved with
respect to the working directory, so as to discourage absolute paths, which compromise the
reproducibility of tests (e.g. new MyContainer("src/main/test/resources", ...));
• MyContainer(myProperties, Gar ... gars) is the most generic of all constructors and
allows us to configure all the available properties in a Properties object, or only those we care
to override. Use the constants in the Utils class to name the properties to be overridden (e.g.
Utils.STARTUP_TIMEOUT_PROPERTY).
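A sketch of the property-based constructor; Utils.STARTUP_TIMEOUT_PROPERTY is the constant named above, while the value format (seconds here) is an assumption to be checked against the Utils constants:

java.util.Properties props = new java.util.Properties();
props.setProperty(Utils.STARTUP_TIMEOUT_PROPERTY, "30");  // assumed to express the timeout in seconds

MyContainer container = new MyContainer(props, myGar);
container.start();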
Finally, note that all MyContainer constructors, including the no-arg constructor, will try to
complement the configuration properties that are implicitly or explicitly provided in code with those
that may be found on the classpath in a file called my-container.properties.
For obvious reasons, pushing non-default configuration into such a file is preferred over hard-
coding it in test code. This is particularly the case when we work with the annotations discussed
above, as the test runner will always create a MyContainer instance through its no-arg
constructor. The property file thus allows us to override the defaults without renouncing the high-
level mode of interaction with the runtime library.
Test Automation
Controlling my-container and the deployment process along the lines illustrated so far satisfies
the requirement for an efficient test and debug model during interactive coding sessions, typically
from within the IDE. Equally, it delivers on the promise for increased share-ability and
reproducibility of tests. In turn, this creates the basis for test automation, i.e. the possibility of
executing our tests during local or remote build processes. As we have already emphasised, test
automation is key to the development process and is one of the main goals behind the work on
my-container.
Given the facilities of the runtime library, automating the tests is a matter of build configuration. As
such, it is rather sensitive to the build system that we use, be it Ant, Maven, or other. In all cases,
however, we are after the possibility to:
• automatically download the distribution of my-container from a remote repository, and install it in
the project prior to launching the tests. Since MyContainer gives us good test isolation, we
want this to happen only if previous builds have not done it already;
• trigger test compilation and execution straight after compilation of service code, including
generated stub classes, with the implication that the build ought to fail whenever a test does not pass.
Ant Automation
Let’s first see how we may achieve this automation within our standard Ant buildfiles. Our default
buildfiles have roughly the following target structure (up to target names):
[Figure: default Ant target structure, spanning targets such as init, processWSDLs, compile, package, gar, and the stub-related targets generateStubs, compileStubs, and deployStubs.]
This structure focuses on the independent generation of two types of build artifacts:
• a Gar archive which packages service binaries, configuration, and Wsdl interfaces
• a Jar archive with binaries of stub code generated from Wsdl interfaces
Since we do not need to test generated code, we introduce testing only in the process of
generating the Gar archive. (As usual, an up-to-date stubs Jar must be on the test classpath for
both internal and external testing). One way of doing this leads to this modified task structure:
We have interposed test execution (test) between the compilation and packaging of service code
(existing targets package, compile), i.e. as soon as possible. Executing the tests requires the
compilation of the tests (compileTests) and the installation and download of my-container
(install-my-container, download-my-container). Of course, compiling the suites
requires compiling the service code first (existing target compile). Finally, the installation of my-
container can be removed at any point (uninstall-my-container) and the configuration of
most test-related targets is centralised in initTest.
An XML serialisation of this structure may look as follows:
<!-- run test suites -->
<target name="test" depends="compileTests,install-my-container" unless="test.skip">...</target>
<!-- compile test suites -->
<target name="compileTests" depends="compile,initTest" unless="test.skip">...</target>
<!-- install my-container -->
<target name="install-my-container" depends="initTest" unless="test.skip">...</target>
<!-- download my-container if not installed -->
<target name="download-my-container" depends="initTest" unless="my-container.installed">...</target>
<!-- uninstall my-container -->
<target name="uninstall-my-container" depends="initTest">...</target>
<!-- package service code -->
<target name="package" depends="test">...</target>
Notice that target dependencies are organised in such a way as to minimise build time in case of
failures; e.g. when the service fails to compile the tests are not compiled, and when the tests fail to
compile, my-container is not downloaded or installed.
Notice also that we can disable all the test-related targets on demand, by setting the test.skip
property:
.../sample-service> ant -Dtest.skip=true
We could have taken the opposite route here and decided to enable test-related targets on
demand, using something like if="test.do" on the test-related targets in place of
unless="test.skip". The choice depends pretty much on the discipline that we want to
impose upon ourselves.
With the target structure in place, let us look at the individual targets, in order of their execution:
<target name="initTest" unless="test.skip">
<!-- my-container installation and download directories -->
<property name="my-container.install.dir" value="${basedir}" />
<property name="my-container.download.dir"
value="${my-container.install.dir}/.my-container" />
<property name="my-container.dir" value="${my-container.install.dir}/my-container" />
<!-- test source directory -->
<property name="test.src.dir" value="test" />
<!-- test library directory -->
<property name="test.lib.dir" vdoalue="test-lib" />
<!-- test binary directory -->
<property name="build.tests.class.dir" location="${build.dir}/test-classes" />
<!-- test reports -->
<property name="test.reports.dir" value="${build.dir}/test-reports" />
</target>
In initTest we specify the key locations for testing:
• where my-container should be downloaded and where it should be installed. For installation,
we choose the project root, where it will be automatically discovered by MyContainer without
the immediate need to define my-container.properties or to pass installation paths to
MyContainer constructors. Keeping the installation outside build.dir saves us from re-
downloading my-container after each cleanup. For similar reasons we also download my-
container under the project root, but we choose a directory that stays hidden in IDEs. Notice
that the install and download directories should be added to the svn:ignore list at commit time;
• where the test sources and the test libraries are;
• where the test classes and test reports ought to be written. Since these outputs are transient,
we place them under build.dir, so as to have them removed at each cleanup.
Next, we move to the management of my-container:
<target name="install-my-container" depends="initTest" unless="test.skip">
<available file="${my-container.dir}" property="my-container.installed" />
<antcall target="download-my-container" />
</target>
<target name="download-my-container" depends="initTest" unless="my-container.installed">
<mkdir dir="${my-container.download.dir}" />
<get src="http://maven.research-infrastructures.eu/nexus/service/
local/artifact/maven/redirect?r=gcube-releases&amp;g=org.gcube.tools&amp;a=my-
container&amp;v=RELEASE&amp;e=tar.gz&amp;c=distro"
dest="${my-container.download.dir}/my-container.tar.gz"
usetimestamp="true" />
<gunzip src="${my-container.download.dir}/my-container.tar.gz"
dest="${my-container.download.dir}" />
<untar src="${my-container.download.dir}/my-container.tar" dest="${basedir}" />
</target>
<target name="uninstall-my-container" depends="initTest">
<delete dir="${my-container.dir}" />
<delete dir="${my-container.download.dir}" />
</target>
In install-my-container we delegate to download-my-container, indicating whether an
installation already exists or not. If it does not exist already, download-my-container fetches
the latest release of my-container from our Nexus repository and unpacks it. uninstall-
my-container cleans up both installation and downloads.
Now we move to compiling the tests:
<target name="compileTests" depends="compile,initTest" unless="test.skip">
<mkdir dir="${build.tests.class.dir}" />
<path id="test.classpath">
<path refid="service.classpath" />
<fileset dir="${test.lib.dir}">
<include name="*.jar" />
</fileset>
<pathelement location="${build.class.dir}" />
<pathelement location="${build.tests.class.dir}" />
</path>
<javac srcdir="${test.src.dir}" destdir="${build.tests.class.dir}"
classpathref="test.classpath"
includeantruntime="false" />
</target>
Compilation occurs against a classpath that adds the test libraries, the service binaries, and the test
binaries to the classpath already used to compile service code. Here we use a reference to
another path (service.classpath), though existing buildfiles may not name the service
classpath explicitly (use copy and paste then!).
What test libraries should be available? At the very least, a version of the runtime library of my-
container. Since we will want to run JUnit 4 tests, we will also need the ant-junit.jar that is
included in any installation of Ant from 1.7.1 onwards (older versions will not work). On the other
hand, we do not need to worry about JUnit binaries, which are bundled in a full distribution of the
container. Of course, any other test utility, framework (e.g. mock libraries), or dependency that we
may be using in the tests goes in test.lib.dir.
Finally we get to test execution:
<target name="test" depends="compileTests,install-my-container" unless="test.skip">
<mkdir dir="${test.reports.dir}" />
<junit printsummary="yes" haltonfailure="true" fork="yes"
dir="${basedir}" includeantruntime="false">
<classpath>
<pathelement location="${test.src.dir}" />
<path refid="test.classpath" />
</classpath>
<formatter type="brief"/> <!-- usefile="false" to get logs in console -->
<batchtest toDir="${test.reports.dir}">
<fileset dir="${test.src.dir}">
<include name="**/*Test.java" />
<include name="**/*Tests.java" />
</fileset>
</batchtest>
</junit>
</target>
We execute the tests in a separate JVM and against a classpath entirely under our control. In
particular, we do not use the local Ant runtime (which may vary) and prefer instead the Ant
support included in our standard container distribution. We add the test sources here, so as to pick
up all the resources that may have been placed there to be loaded by the tests (including
my-container.properties, log4j.properties, ...).
And that’s it. Launching this buildfile from console or from within the IDE will show us that,
whenever we do not explicitly disable it, the execution of our test suites has become an integral part of
our builds. This will help us confirm that we have not introduced regression errors as we refactor
the code, before we commit the changes and Etics integrates them in gCube every night.

Agile Mumbai 2020 Conference | How to get the best ROI on Your Test Automati...AgileNetwork
 
Best Practices for Applications Performance Testing
Best Practices for Applications Performance TestingBest Practices for Applications Performance Testing
Best Practices for Applications Performance TestingBhaskara Reddy Sannapureddy
 
Interview questions and answers for quality assurance
Interview questions and answers for quality assuranceInterview questions and answers for quality assurance
Interview questions and answers for quality assuranceGaruda Trainings
 
Test-Driven Developments are Inefficient; Behavior-Driven Developments are a ...
Test-Driven Developments are Inefficient; Behavior-Driven Developments are a ...Test-Driven Developments are Inefficient; Behavior-Driven Developments are a ...
Test-Driven Developments are Inefficient; Behavior-Driven Developments are a ...Abdelkrim Boujraf
 
Estimating test effort part 1 of 2
Estimating test effort part 1 of 2Estimating test effort part 1 of 2
Estimating test effort part 1 of 2Ian McDonald
 
Bridging the communication gap
Bridging the communication gapBridging the communication gap
Bridging the communication gapGuillagui San
 
From Monoliths to Microservices at Realestate.com.au
From Monoliths to Microservices at Realestate.com.auFrom Monoliths to Microservices at Realestate.com.au
From Monoliths to Microservices at Realestate.com.auevanbottcher
 

Similar to Technical Report: My Container (20)

Test-Driven Developments are Inefficient; Behavior-Driven Developments are a ...
Test-Driven Developments are Inefficient; Behavior-Driven Developments are a ...Test-Driven Developments are Inefficient; Behavior-Driven Developments are a ...
Test-Driven Developments are Inefficient; Behavior-Driven Developments are a ...
 
Testing Hourglass at Jira Frontend - by Alexey Shpakov, Sr. Developer @ Atlas...
Testing Hourglass at Jira Frontend - by Alexey Shpakov, Sr. Developer @ Atlas...Testing Hourglass at Jira Frontend - by Alexey Shpakov, Sr. Developer @ Atlas...
Testing Hourglass at Jira Frontend - by Alexey Shpakov, Sr. Developer @ Atlas...
 
TestDrivenDeveloment
TestDrivenDevelomentTestDrivenDeveloment
TestDrivenDeveloment
 
Driven to Tests
Driven to TestsDriven to Tests
Driven to Tests
 
Why do you need multiple qa environments
Why do you need multiple qa environments Why do you need multiple qa environments
Why do you need multiple qa environments
 
Implementing a testing strategy
Implementing a testing strategyImplementing a testing strategy
Implementing a testing strategy
 
Why do you need multiple qa environments
Why do you need multiple qa environments Why do you need multiple qa environments
Why do you need multiple qa environments
 
Testing 101
Testing 101Testing 101
Testing 101
 
201008 Software Testing Notes (part 1/2)
201008 Software Testing Notes (part 1/2)201008 Software Testing Notes (part 1/2)
201008 Software Testing Notes (part 1/2)
 
DevOps
DevOpsDevOps
DevOps
 
30 February 2005 QUEUE rants [email protected] DARNEDTestin.docx
30  February 2005  QUEUE rants [email protected] DARNEDTestin.docx30  February 2005  QUEUE rants [email protected] DARNEDTestin.docx
30 February 2005 QUEUE rants [email protected] DARNEDTestin.docx
 
Software Testing
Software TestingSoftware Testing
Software Testing
 
Design testabilty
Design testabiltyDesign testabilty
Design testabilty
 
Agile Mumbai 2020 Conference | How to get the best ROI on Your Test Automati...
Agile Mumbai 2020 Conference |  How to get the best ROI on Your Test Automati...Agile Mumbai 2020 Conference |  How to get the best ROI on Your Test Automati...
Agile Mumbai 2020 Conference | How to get the best ROI on Your Test Automati...
 
Best Practices for Applications Performance Testing
Best Practices for Applications Performance TestingBest Practices for Applications Performance Testing
Best Practices for Applications Performance Testing
 
Interview questions and answers for quality assurance
Interview questions and answers for quality assuranceInterview questions and answers for quality assurance
Interview questions and answers for quality assurance
 
Test-Driven Developments are Inefficient; Behavior-Driven Developments are a ...
Test-Driven Developments are Inefficient; Behavior-Driven Developments are a ...Test-Driven Developments are Inefficient; Behavior-Driven Developments are a ...
Test-Driven Developments are Inefficient; Behavior-Driven Developments are a ...
 
Estimating test effort part 1 of 2
Estimating test effort part 1 of 2Estimating test effort part 1 of 2
Estimating test effort part 1 of 2
 
Bridging the communication gap
Bridging the communication gapBridging the communication gap
Bridging the communication gap
 
From Monoliths to Microservices at Realestate.com.au
From Monoliths to Microservices at Realestate.com.auFrom Monoliths to Microservices at Realestate.com.au
From Monoliths to Microservices at Realestate.com.au
 

More from Fabio Simeoni

Featherweight Clients (Athens, 2012)
Featherweight Clients (Athens, 2012)Featherweight Clients (Athens, 2012)
Featherweight Clients (Athens, 2012)Fabio Simeoni
 
My Container (Sophia, 2011)
My Container (Sophia, 2011)My Container (Sophia, 2011)
My Container (Sophia, 2011)Fabio Simeoni
 
Client Libraries (Rodhes, 2011)
Client Libraries (Rodhes, 2011)Client Libraries (Rodhes, 2011)
Client Libraries (Rodhes, 2011)Fabio Simeoni
 
The Virtual Repository
The Virtual RepositoryThe Virtual Repository
The Virtual RepositoryFabio Simeoni
 
the-hitchhiker-s-guide-to-testing
the-hitchhiker-s-guide-to-testingthe-hitchhiker-s-guide-to-testing
the-hitchhiker-s-guide-to-testingFabio Simeoni
 
a-strategy-for-continuous-delivery
a-strategy-for-continuous-deliverya-strategy-for-continuous-delivery
a-strategy-for-continuous-deliveryFabio Simeoni
 

More from Fabio Simeoni (10)

Smartgears
SmartgearsSmartgears
Smartgears
 
Featherweight Clients (Athens, 2012)
Featherweight Clients (Athens, 2012)Featherweight Clients (Athens, 2012)
Featherweight Clients (Athens, 2012)
 
My Container (Sophia, 2011)
My Container (Sophia, 2011)My Container (Sophia, 2011)
My Container (Sophia, 2011)
 
Project Apash
Project ApashProject Apash
Project Apash
 
Client Libraries (Rodhes, 2011)
Client Libraries (Rodhes, 2011)Client Libraries (Rodhes, 2011)
Client Libraries (Rodhes, 2011)
 
The Virtual Repository
The Virtual RepositoryThe Virtual Repository
The Virtual Repository
 
Hello Cotrix
Hello CotrixHello Cotrix
Hello Cotrix
 
the-hitchhiker-s-guide-to-testing
the-hitchhiker-s-guide-to-testingthe-hitchhiker-s-guide-to-testing
the-hitchhiker-s-guide-to-testing
 
a-strategy-for-continuous-delivery
a-strategy-for-continuous-deliverya-strategy-for-continuous-delivery
a-strategy-for-continuous-delivery
 
Grade@cnr
Grade@cnrGrade@cnr
Grade@cnr
 

Recently uploaded

HR Software Buyers Guide in 2024 - HRSoftware.com
HR Software Buyers Guide in 2024 - HRSoftware.comHR Software Buyers Guide in 2024 - HRSoftware.com
HR Software Buyers Guide in 2024 - HRSoftware.comFatema Valibhai
 
A Secure and Reliable Document Management System is Essential.docx
A Secure and Reliable Document Management System is Essential.docxA Secure and Reliable Document Management System is Essential.docx
A Secure and Reliable Document Management System is Essential.docxComplianceQuest1
 
Software Quality Assurance Interview Questions
Software Quality Assurance Interview QuestionsSoftware Quality Assurance Interview Questions
Software Quality Assurance Interview QuestionsArshad QA
 
+971565801893>>SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHAB...
+971565801893>>SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHAB...+971565801893>>SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHAB...
+971565801893>>SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHAB...Health
 
TECUNIQUE: Success Stories: IT Service provider
TECUNIQUE: Success Stories: IT Service providerTECUNIQUE: Success Stories: IT Service provider
TECUNIQUE: Success Stories: IT Service providermohitmore19
 
CALL ON ➥8923113531 🔝Call Girls Badshah Nagar Lucknow best Female service
CALL ON ➥8923113531 🔝Call Girls Badshah Nagar Lucknow best Female serviceCALL ON ➥8923113531 🔝Call Girls Badshah Nagar Lucknow best Female service
CALL ON ➥8923113531 🔝Call Girls Badshah Nagar Lucknow best Female serviceanilsa9823
 
Diamond Application Development Crafting Solutions with Precision
Diamond Application Development Crafting Solutions with PrecisionDiamond Application Development Crafting Solutions with Precision
Diamond Application Development Crafting Solutions with PrecisionSolGuruz
 
How To Troubleshoot Collaboration Apps for the Modern Connected Worker
How To Troubleshoot Collaboration Apps for the Modern Connected WorkerHow To Troubleshoot Collaboration Apps for the Modern Connected Worker
How To Troubleshoot Collaboration Apps for the Modern Connected WorkerThousandEyes
 
W01_panagenda_Navigating-the-Future-with-The-Hitchhikers-Guide-to-Notes-and-D...
W01_panagenda_Navigating-the-Future-with-The-Hitchhikers-Guide-to-Notes-and-D...W01_panagenda_Navigating-the-Future-with-The-Hitchhikers-Guide-to-Notes-and-D...
W01_panagenda_Navigating-the-Future-with-The-Hitchhikers-Guide-to-Notes-and-D...panagenda
 
Unveiling the Tech Salsa of LAMs with Janus in Real-Time Applications
Unveiling the Tech Salsa of LAMs with Janus in Real-Time ApplicationsUnveiling the Tech Salsa of LAMs with Janus in Real-Time Applications
Unveiling the Tech Salsa of LAMs with Janus in Real-Time ApplicationsAlberto González Trastoy
 
5 Signs You Need a Fashion PLM Software.pdf
5 Signs You Need a Fashion PLM Software.pdf5 Signs You Need a Fashion PLM Software.pdf
5 Signs You Need a Fashion PLM Software.pdfWave PLM
 
call girls in Vaishali (Ghaziabad) 🔝 >༒8448380779 🔝 genuine Escort Service 🔝✔️✔️
call girls in Vaishali (Ghaziabad) 🔝 >༒8448380779 🔝 genuine Escort Service 🔝✔️✔️call girls in Vaishali (Ghaziabad) 🔝 >༒8448380779 🔝 genuine Escort Service 🔝✔️✔️
call girls in Vaishali (Ghaziabad) 🔝 >༒8448380779 🔝 genuine Escort Service 🔝✔️✔️Delhi Call girls
 
Unlocking the Future of AI Agents with Large Language Models
Unlocking the Future of AI Agents with Large Language ModelsUnlocking the Future of AI Agents with Large Language Models
Unlocking the Future of AI Agents with Large Language Modelsaagamshah0812
 
How To Use Server-Side Rendering with Nuxt.js
How To Use Server-Side Rendering with Nuxt.jsHow To Use Server-Side Rendering with Nuxt.js
How To Use Server-Side Rendering with Nuxt.jsAndolasoft Inc
 
CALL ON ➥8923113531 🔝Call Girls Kakori Lucknow best sexual service Online ☂️
CALL ON ➥8923113531 🔝Call Girls Kakori Lucknow best sexual service Online  ☂️CALL ON ➥8923113531 🔝Call Girls Kakori Lucknow best sexual service Online  ☂️
CALL ON ➥8923113531 🔝Call Girls Kakori Lucknow best sexual service Online ☂️anilsa9823
 
Optimizing AI for immediate response in Smart CCTV
Optimizing AI for immediate response in Smart CCTVOptimizing AI for immediate response in Smart CCTV
Optimizing AI for immediate response in Smart CCTVshikhaohhpro
 
Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...
Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...
Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...harshavardhanraghave
 
Short Story: Unveiling the Reasoning Abilities of Large Language Models by Ke...
Short Story: Unveiling the Reasoning Abilities of Large Language Models by Ke...Short Story: Unveiling the Reasoning Abilities of Large Language Models by Ke...
Short Story: Unveiling the Reasoning Abilities of Large Language Models by Ke...kellynguyen01
 
Steps To Getting Up And Running Quickly With MyTimeClock Employee Scheduling ...
Steps To Getting Up And Running Quickly With MyTimeClock Employee Scheduling ...Steps To Getting Up And Running Quickly With MyTimeClock Employee Scheduling ...
Steps To Getting Up And Running Quickly With MyTimeClock Employee Scheduling ...MyIntelliSource, Inc.
 

Recently uploaded (20)

HR Software Buyers Guide in 2024 - HRSoftware.com
HR Software Buyers Guide in 2024 - HRSoftware.comHR Software Buyers Guide in 2024 - HRSoftware.com
HR Software Buyers Guide in 2024 - HRSoftware.com
 
A Secure and Reliable Document Management System is Essential.docx
A Secure and Reliable Document Management System is Essential.docxA Secure and Reliable Document Management System is Essential.docx
A Secure and Reliable Document Management System is Essential.docx
 
Software Quality Assurance Interview Questions
Software Quality Assurance Interview QuestionsSoftware Quality Assurance Interview Questions
Software Quality Assurance Interview Questions
 
+971565801893>>SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHAB...
+971565801893>>SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHAB...+971565801893>>SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHAB...
+971565801893>>SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHAB...
 
TECUNIQUE: Success Stories: IT Service provider
TECUNIQUE: Success Stories: IT Service providerTECUNIQUE: Success Stories: IT Service provider
TECUNIQUE: Success Stories: IT Service provider
 
CALL ON ➥8923113531 🔝Call Girls Badshah Nagar Lucknow best Female service
CALL ON ➥8923113531 🔝Call Girls Badshah Nagar Lucknow best Female serviceCALL ON ➥8923113531 🔝Call Girls Badshah Nagar Lucknow best Female service
CALL ON ➥8923113531 🔝Call Girls Badshah Nagar Lucknow best Female service
 
Diamond Application Development Crafting Solutions with Precision
Diamond Application Development Crafting Solutions with PrecisionDiamond Application Development Crafting Solutions with Precision
Diamond Application Development Crafting Solutions with Precision
 
How To Troubleshoot Collaboration Apps for the Modern Connected Worker
How To Troubleshoot Collaboration Apps for the Modern Connected WorkerHow To Troubleshoot Collaboration Apps for the Modern Connected Worker
How To Troubleshoot Collaboration Apps for the Modern Connected Worker
 
W01_panagenda_Navigating-the-Future-with-The-Hitchhikers-Guide-to-Notes-and-D...
W01_panagenda_Navigating-the-Future-with-The-Hitchhikers-Guide-to-Notes-and-D...W01_panagenda_Navigating-the-Future-with-The-Hitchhikers-Guide-to-Notes-and-D...
W01_panagenda_Navigating-the-Future-with-The-Hitchhikers-Guide-to-Notes-and-D...
 
Unveiling the Tech Salsa of LAMs with Janus in Real-Time Applications
Unveiling the Tech Salsa of LAMs with Janus in Real-Time ApplicationsUnveiling the Tech Salsa of LAMs with Janus in Real-Time Applications
Unveiling the Tech Salsa of LAMs with Janus in Real-Time Applications
 
5 Signs You Need a Fashion PLM Software.pdf
5 Signs You Need a Fashion PLM Software.pdf5 Signs You Need a Fashion PLM Software.pdf
5 Signs You Need a Fashion PLM Software.pdf
 
call girls in Vaishali (Ghaziabad) 🔝 >༒8448380779 🔝 genuine Escort Service 🔝✔️✔️
call girls in Vaishali (Ghaziabad) 🔝 >༒8448380779 🔝 genuine Escort Service 🔝✔️✔️call girls in Vaishali (Ghaziabad) 🔝 >༒8448380779 🔝 genuine Escort Service 🔝✔️✔️
call girls in Vaishali (Ghaziabad) 🔝 >༒8448380779 🔝 genuine Escort Service 🔝✔️✔️
 
Unlocking the Future of AI Agents with Large Language Models
Unlocking the Future of AI Agents with Large Language ModelsUnlocking the Future of AI Agents with Large Language Models
Unlocking the Future of AI Agents with Large Language Models
 
How To Use Server-Side Rendering with Nuxt.js
How To Use Server-Side Rendering with Nuxt.jsHow To Use Server-Side Rendering with Nuxt.js
How To Use Server-Side Rendering with Nuxt.js
 
CHEAP Call Girls in Pushp Vihar (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Pushp Vihar (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICECHEAP Call Girls in Pushp Vihar (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Pushp Vihar (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
 
CALL ON ➥8923113531 🔝Call Girls Kakori Lucknow best sexual service Online ☂️
CALL ON ➥8923113531 🔝Call Girls Kakori Lucknow best sexual service Online  ☂️CALL ON ➥8923113531 🔝Call Girls Kakori Lucknow best sexual service Online  ☂️
CALL ON ➥8923113531 🔝Call Girls Kakori Lucknow best sexual service Online ☂️
 
Optimizing AI for immediate response in Smart CCTV
Optimizing AI for immediate response in Smart CCTVOptimizing AI for immediate response in Smart CCTV
Optimizing AI for immediate response in Smart CCTV
 
Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...
Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...
Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...
 
Short Story: Unveiling the Reasoning Abilities of Large Language Models by Ke...
Short Story: Unveiling the Reasoning Abilities of Large Language Models by Ke...Short Story: Unveiling the Reasoning Abilities of Large Language Models by Ke...
Short Story: Unveiling the Reasoning Abilities of Large Language Models by Ke...
 
Steps To Getting Up And Running Quickly With MyTimeClock Employee Scheduling ...
Steps To Getting Up And Running Quickly With MyTimeClock Employee Scheduling ...Steps To Getting Up And Running Quickly With MyTimeClock Employee Scheduling ...
Steps To Getting Up And Running Quickly With MyTimeClock Employee Scheduling ...
 

be earlier opportunities, but we do not use them and those opportunities do not materialise).

This is also where we test integration between service components and client components and, most crucially, between service components and container components, particularly gCore components. Are the interfaces supportive of functional requirements? Do client requests serialise and deserialise correctly? Does the service manage its state correctly? Does it produce the expected outputs? Does it publish and retrieve information from the Information System as expected? Does it work correctly in multiple scopes? How does it behave under concurrent load and with large payloads? And so on.

System Testing

The questions above are among the first ones, both functional and non-functional, that we try to answer when developing new services or modifying existing ones. Sometimes they are also the only questions. Often, however, our services have runtime dependencies that:
• cannot be satisfied within a single container (external dependencies)
• are not the ubiquitous ones towards the Information System (publication and discovery).
In this case, integration testing takes a broader scope and a coarser grain, requiring the staging of multiple services in multiple containers. This is system testing, and we approach it in cooperatively managed development infrastructures, such as devNext. It has been noted that this solution has proven inadequate for ensuring a stable and reliable testing environment: a development infrastructure, at times more than others, approximates wilderness. It has also been noted that improving over this solution is of key importance for our future.

Yet I believe that improving in-container testing is even more crucial, for two reasons:
• if we can discover bugs or design inefficiencies in the first testing environment that allows us to observe them (sooner for some services than for others), then fewer bugs will need to be found in the wilderness, and releases will be staged faster and less painfully. Overall, fewer bugs will risk making it to production.
• most importantly, I believe that improving in-container testing is a necessary step towards better solutions for system testing. I will not speculate today on what these solutions may be, as it would be premature. However, some ideas - not entirely new in fact - are starting to emerge precisely as generalisations of early experience with in-container testing.

Status Quo

So, what is there to improve upon when it comes to current practices for in-container testing? What do we do today to test services that run inside containers? Practice and mileage may vary, but I think it is safe to assume that most of us rely on so-called test clients. We first build and deploy the service code in some target container, then we launch the clients and observe the outcomes. We then correct or evolve the clients in response to failures and changing requirements, and relaunch the tests (one hopes!). That is pretty much it.

Looking at the problems

Though quickly described, this approach to testing is complex and inherently manual. In particular, it results in tests that:
• are not repeatable
• are hard to share within a team
• execute more slowly and less frequently than they should
• are never executed within build processes.
It seems to me that the root problem here is that tests and container have different lifetimes and are managed in different environments. They have different lifetimes in that, to retain some sanity, we tend to use containers that are dedicated neither to the test or test suite nor to the service targeted by those tests. This means that the state of the container may change across runs of the same test; the environment in which it runs may change, and so may its configuration. With our containers in particular, libraries may come and go freely at the rhythm of deployments and undeployments. As a result, a test that works today may fail tomorrow on the same machine without any intervening change to the service code or the test.

Within teams, these reproducibility problems show up even more across different machines, i.e. across space as well as over time. And sharing an installation of the container creates its own problems, to do with distributed management and poorer, slower working environments.
Sharing the tests is complicated in itself. They depend on the physical location of the container, on its endpoint, and on the various environment variables or property files that one normally uses to push these contextual dependencies outside test code. Often undocumented, these contextual dependencies result in test clients which are understood, and thus executed, only by their author. This means that they are executed too late with respect to code changes that have been applied, and are better understood, by other team members.

Reproducibility and sharing issues aside, the separation between tests and container makes for containers that host more deployments than the tests actually need (local services, Globus notification services, etc.). Startup and execution times become longer and coding sessions slower.

A lot of our time also goes into managing the container's lifetime, i.e. starting and stopping the container before and after the test. In most cases, we do this from the console, in an environment other than the IDE in which we author test and production code. If we do manage the container's lifetime from within the IDE, we end up creating the same kind of synchronisation problems for the team which we have already discussed for the test clients.

What is probably most time-consuming for us is having to go through build-and-deploy cycles at each and every change in the code. Testing a one-line change in service code tends to take tens of seconds rather than milliseconds. All these inefficiencies push developers to seek testing feedback less incrementally than they should. The later the feedback, the harder it is to pin down problems and sort them out.

Notice that all the problems above become worse if containers are fully CONNECTED to a development infrastructure, regardless of actual test requirements. This effectively means that we do system testing even when we could do in-container testing, i.e. in a scope which is considerably more complex to control. I suspect not many containers join infrastructures in STANDALONE mode (this is a "stealth mode": the Information System can be queried, but the service leaves no visible traces in the infrastructure; the container can come up and go down without causing disruption, and many runtime activities of the container are avoided, which makes for quicker startup and test execution).

Finally, how are we to automate this approach to integration testing? I do not know how to answer that question, but I suspect that it is very difficult, if not impossible. This means that we cannot test the code as part of our local or remote builds, which decreases our chances of catching regression errors. Our confidence in changes then diminishes and design enters a state of paralysis.

Having optimised our development practices over the years may make these problems occur less often than they otherwise would. This does not mean that they do not occur, or that they do not occur when there is less time to handle them, typically under release pressure. Mostly, it does not mean that we should not put our time into more creative implementation and design activities!
Requirements

It seems to me that improving over the status quo calls for tighter integration between the tests and the container in which we deploy the service under testing.

To address problems of test reproducibility and test performance, we need a container which is entirely dedicated to the service and its tests. Only then will we get a guarantee that, at each test run, the container is configured with no fewer and no more deployments than are required for that run.

To make this viable, and to address problems of development efficiency, test share-ability, and test automation, we need container and tests to share the same execution environment. In other words, we need a container that can be embedded in the tests, i.e. can be configured, started, stopped, and used from within the tests.

If container and tests run in the same JVM, they will "see" the same classpath resources, including service code. This means that changes applied in a coding session from the IDE will be immediately "live" within the container, i.e. we will not need to explicitly build and deploy them before test execution (manually or not, from within the same or another development environment, we just won't). We will still have requirements for explicit deployment and undeployment, but these will be limited to resources which should not or cannot be on the classpath, such as:
• WSDL interfaces, scripts, and the various forms of configuration files that may have changed since the last test execution
• libraries and GARs of other services that the test requires to be co-deployed with the target service.
Like the container, we need to embed these light-weight deployments and undeployments in the tests, as pre-conditions and post-conditions to test execution.

As an important side-effect of the single-JVM assumption, the tests will be able to obtain references to service and gCore components as these run in the container, including port-type implementations, service contexts, resource homes, the GHNContext, resource serialisations, etc. This means that we will be able to exercise not only client-driven tests but also service-side tests. We will be able to make assertions on the state of those components and on the state of the container within the tests.

This potential will lead to service designs that are more testable than they are now. We will be encouraged to design our service components so that mock dependencies can be injected into them during the execution of the tests. In other words, we will be able to lift well-known unit testing techniques into the context of integration testing, circumventing the obstacles to unit testing that I have discussed above.
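To make the last point a little more concrete, here is a minimal sketch of the kind of component design this encourages. All names are hypothetical (CatalogueClient, SampleComponent, and the stub below are illustrations, not part of gCore or of the tools presented next): the external dependency sits behind a small interface and a package-visible setter, so that a test running in the same JVM as the container can replace it with a stub before making assertions.

// Hypothetical collaborator interface behind which a service component hides an external dependency.
interface CatalogueClient {
  String lookup(String id);
}

// Hypothetical service component, designed so that tests can swap the dependency.
class SampleComponent {

  private CatalogueClient catalogue;   // normally set to a remote client at initialisation

  void setCatalogue(CatalogueClient catalogue) {   // package-visible hook, used only by tests
    this.catalogue = catalogue;
  }

  public String describe(String id) {
    return "record: " + catalogue.lookup(id);
  }
}

// What an in-container test could do with the live component (obtained, for example,
// through the port-type implementation, as shown later):
class SampleComponentTestSupport {

  static void stubCatalogue(SampleComponent component) {
    component.setCatalogue(new CatalogueClient() {
      public String lookup(String id) {
        return "stubbed-" + id;
      }
    });
  }
}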
My Container

Over the past couple of months we have worked towards meeting the requirements above with our current technologies. The idea was to produce tools that supported an embedding of Globus and simplified its use for in-container testing. Since we ultimately wanted to deliver a dedicated and friendly container, we initially code-named the project Pasqualino. Eventually, we settled on a slightly more neutral name: my-container.

The first difficulty for us was that, true to its age, the Globus container was not born to be easily embedded. One way in which Globus customises Axis is by hard-wiring its file-based initialisation mode: the container comes up and seeks evidence of service deployments in distinguished folders on the file system. This means that an embedded Globus container still needs a physical installation, i.e. it cannot exist purely in memory.

The best that we could do was to target a minimal installation of the container: my-container is distributed as a tarball of about 50KB and expands to about 0.5MB of disk space. It does not contain a single script, pre-deployed service, or, in fact, library. It contains only the support necessary for embedded deployment (see later) and configuration to start up on localhost:9999, in DISCONNECTED mode for a number of known infrastructures, devNext by default. Differently from standard container distributions, it also embeds the storage of service state, for convenience of post-test inspection during coding sessions. With this distribution, my-container discourages deployments and startups which are not defined in code, and it requires explicit control of the classpath. (The installation is essentially empty, carrying only the embedded storage and the build support.)
The distribution of my-container is built in Etics, where it is available for manual download. Underneath, however, it uses Maven as its build system and is published at least every night to our Nexus repository at http://maven.research-infrastructures.eu/nexus.

We can manually download my-container and install it as a test resource of our components, excluding it from version control. Even better, we can automate the download and installation of my-container during the build of our components. As I will show later, we can achieve this automation for our standard, Ant-based components. We can equally achieve it for the new breed of Maven-based components that we are slowly integrating within gCube in a parallel line of work.

Runtime Library

The distribution of my-container satisfies Globus' requirement for a physical installation. Next, we needed support to control it from within the tests. This support is provided by a dedicated library, which we refer to as the runtime library of my-container. Like the distribution, the library is built every night in Etics and, as a Maven-based component, is available in our Nexus repository. We can download the library and embed it in our components as a test library. We can submit it to version control, or keep it outside our projects and explicitly depend on it for Etics builds. While no decision has been made yet, it is possible that future versions of gCore may embed it, much as they now embed the Ant and JUnit libraries.

The runtime library supports two modes of interaction with my-container:
• a low-level mode, whereby we interact directly with my-container through an API
• a high-level mode, whereby we use annotations and JUnit 4 extensions to delegate interactions with my-container.
The high-level mode is recommended, as it makes test code simpler to write and read. The low-level mode can be used for use cases which are not covered by the high-level mode. The choice does not need to be exclusive, as the two modes can be combined within a single test or test suite. We start from the beginning, looking at the low-level mode first.

The low-level API

The basic facility that we find in the runtime library is MyContainer, an abstraction over the local installation of my-container which we use to interact with the container from within our tests. The standard usage pattern is as follows:
• create an instance of MyContainer
• invoke the method start() on it
• write the test code proper, interacting with the instance if required by the test
• invoke the method stop() on it.
In the first step, we identify the local installation of my-container and get a chance to configure the container for the test, including the service or services that we wish to deploy in it for testing purposes. In the second step, we block until the container reaches:
• the state CERTIFIED
• the state DOWN or FAILED
• none of the states above within a configurable timeout.
In the first case start() returns successfully and the test can progress further; in the latter two, start() raises an exception that fails the test. In the third step, we write test code that will run "in" the container, i.e. in the same runtime that we expect for service code. We can access the GHNContext to inspect its state, deploy some plugins, register some listeners, etc. We can also access the port-types, contexts, homes, etc. of the services that we have deployed in the container. We can then get to the usual testing business, i.e. make assertions about the state of these components and verify the occurrence of expected interactions. With the final step, we stop the container and perform any required cleanup.

Consider the simplest of examples:

MyContainer container = new MyContainer();
container.start();
container.stop();

Since we instantiate MyContainer without any configuration, the container will start up with defaults:
• the installation will be expected in a directory my-container under the working directory
• the container will start on port 9999
• no service will be deployed in it
• the startup timeout will be 10 seconds.
Since we also specify no test code proper, the container will be stopped as soon as it reaches an operational state (just READY in this case, as there are no deployments that require certification).

Deployments

While the three lines above may serve as a "smoke test" for gCore itself, they are of little use for service testing. To test a service, we need to be able to deploy it into my-container before startup. And to do so from within test code, we need to model the deployment unit of Globus, the Grid Archive. The runtime library includes the Gar class for this purpose. We can create a Gar instance and point it to all the project resources that we wish to deploy in my-container, from WSDLs and supporting XML Schemas, to configuration files (JNDI, WSDD, profile.xml, registration.xml, ...), to libraries.

Assuming our standard project layout, for example, we can assemble a Gar of the service under development as follows:

Gar myGar = new Gar("my-service").addInterfaces("schema").addConfigurations("etc");

Here we use the builder API of the Gar class to mimic in code what we normally do with dedicated Ant targets during the build of our service. Differently from those targets, however, the programmatic construction of the archive is independent of any project layout or underlying build system. We add project resources by providing their paths relative to the project root (e.g. schema and etc). With a project layout that aligns with Maven conventions, for example, we may write instead:

Gar myGar = new Gar("my-service").addInterfaces("src/main/wsdl")
                                 .addConfigurations("src/main/resources/META-INF");

Before we get to actually deploying the Gar, there are a few things to notice:
• we provide a name for the Gar under which its resources will be deployed (e.g. my-service). As usual, this name must be consistent with the relative paths to deployed WSDLs that we specify in the deployment descriptor of the service. Our standard Ant buildfiles use package names for the purpose and our deployments reflect this convention. We would then create Gar instances accordingly, e.g. new Gar("org.acme.sample")...
• we provided relative paths to whole directories of resources. This is a convenient way to add resources en masse to the Gar, which matches our current project layouts well. If the need arises, however, we can point to individual resources using methods such as addInterface() and addConfiguration(), which expect relative paths to individual files (e.g. addInterface("config/profile.xml")). This supports non-conventional project layouts. Equally, it allows us to "override" some of the standard resources for exploratory programming (e.g. to test the service with a non-standard profile). In these use cases, we can first add
directories of standard resources and then add dedicated test resources that override some of the standard ones.
• we have not added libraries to the Gar. This is because the service code is expected to be already on the classpath, including generated stub classes (as usual, these are placed on the classpath by previous build steps). As we discussed above, this code is immediately "live" in my-container. In some cases, however, the tests may require the deployment of libraries that are not on the classpath (e.g. service plugin implementations). In these use cases, we can use the methods addLibrary() and addLibraries() to add these runtime libraries to the Gar.

Once we have assembled the Gar for deployment, we can pass it to the constructor of MyContainer. When we invoke start(), the Gar is deployed in my-container before the container is actually started. For example:

Gar myGar = new Gar("my-service").addInterfaces("schema").addConfigurations("etc");
MyContainer container = new MyContainer(myGar);
container.start(); // gar deployed at this point

Deployment goes through the steps we normally observe during a build. Underneath, in fact, MyContainer programmatically invokes the same Ant buildfiles which are normally found in full-blown container installations and which are retained in my-container (this ensures consistency with external build processes). WSDLs will be extended with Globus providers and binding details, deployment descriptors and JNDI files will be renamed and post-processed for expression filtering (e.g. @config.dir), resources will be placed in the usual locations in the container (e.g. share/schema/my-service, lib, etc/my-service), undeployment scripts will be generated (undeploy.xml), and so on. Accordingly, we get a first form of feedback about the service under development, even before we have exercised any piece of service functionality: if the container starts up, then our service has deployed correctly; otherwise, we have made some mistake in the configuration of the service, which we rectify straight away.

Notice that we can deploy many Gars at once and there is no minimum requirement for what we put in each individual Gar. This may be useful when we need to deploy auxiliary libraries, as we can assemble the standard Gar for the service, a separate Gar for the auxiliary libraries, and then deploy the two Gars together, e.g.:

Gar myGar = new Gar("my-service").addInterfaces("schema").addConfigurations("etc");
Gar auxGar = new Gar("my-plugin").addLibrary("test/my-plugin.jar");
container = new MyContainer(myGar, auxGar);
container.start();

We can also construct a Gar instance from an existing archive:

Gar existingGar = new Gar("test/somegar.gar");

This is useful if we wish to place our tests outside the service under testing, in a separate module that assumes that the archive of the service has been previously built and is available as a test resource. It is also useful if the test requires the service to be deployed in my-container along with other services, e.g.:
Gar myGar = new Gar("my-service").addInterfaces("schema").addConfigurations("etc");
Gar codeployedGar = new Gar("test/other-service.gar");
container = new MyContainer(myGar, codeployedGar);
container.start();

Port-types and Endpoint References

After deployment and container startup, the test proper can begin. For example, a smoke test that should be part of the test suites of all our services is the following:

Gar myGar = ...
container = new MyContainer(myGar);
container.start();
assert(ServiceContext.getContext().getStatus() == GCUBEServiceContext.Status.READIED);
container.stop();

This test inspects the state of the service to ensure that it has come up correctly in my-container. Next, we will want to test the methods of the service API. We can do this in either one of two ways:
• by directly invoking the methods of some port-type implementation (internal tests)
• by using the service stubs, as a client would normally do (external tests).
For internal testing, we need access to the implementation of the port-type. As this is instantiated and controlled by Globus, we cannot access it from other service components but need to ask MyContainer for it, e.g.:

Stateless pt = container.portType("acme/sample/stateless", Stateless.class);
... pt.about(...) ...

Here we pass the name of the port-type to the container ("acme/sample/stateless", as it would be specified in the deployment descriptor of the service), along with the class of the instance that we expect back (Stateless). We then directly invoke the method of the port-type that we wish to test (about()).

For external testing, we need an endpoint reference to the port-type. Again, we ask MyContainer for one:

EndpointReference epr = container.endpoint("acme/sample/stateless");
StatelessPortType pt = new StatelessServiceAddressingLocator().getStatelessPortTypePort(epr);
... pt.about(...) ...

Clearly, external tests give more feedback than internal ones, at the cost of slightly slower execution times (but remember, we are using localhost!). In particular, they flag any problems we may have with input and output serialisations. If we experience any such problem during development, we may want to temporarily enable org.apache.axis.utils.tcpmon on a port other than the container's, so as to inspect the serialisations directly. In this case, we need an endpoint reference configured for the monitored port, e.g. 9000:

EndpointReference epr = container.endpoint("acme/sample/stateless", 9000);
Once we have sorted the problem out, we can revert the code to endpoint references that point to the container's port, as we will not have, or want, the TCP monitor running when the tests are executed non-interactively during build processes.

Starting from these basic facilities, the precise actions that we take in our tests depend on how we designed the service and on our ingenuity; the possibilities are practically endless. If we need to, we can obtain from MyContainer access to key locations in the container. For example:
• configLocation() gives us access to the configuration directory of my-container. This allows us to override key configuration files before we start the container (e.g. add a ServiceMap, deploy a custom GHNConfig.xml file, enable security, etc.)
• storageLocation() gives us access to the storage directory of my-container, where we can find the serialisations of any stateful resources that we may have created during the test (e.g. to confirm the creation of such serialisations).
Other key locations are also available through MyContainer (location(), libLocation(), deploymentsLocation()), though these are used primarily by MyContainer itself and are unlikely to be targeted by our tests. Overall, we can in principle access any gCore component and service component that may enable us to exercise the intended behaviour of the service under testing.

Logging

One immediate advantage of running tests and container in the same JVM is that the logs emitted by either are merged in a single log. This gives us a full picture of the execution, a picture that can be delivered to the console of our IDE. Building on this potential, the distribution of my-container includes a log4j.properties configuration file, which is loaded dynamically as soon as we instantiate MyContainer. In it, the loggers used by Globus, Axis, and gCore are configured to append to the console (only warnings in the first two cases). So, we do not need to take any action to find the container's logs in, say, our Eclipse console.
As a further convenience, the log4j.properties in my-container also includes configuration for loggers called test. This gives us a configuration-free way to log from within our test code. For example, using such a logger in test code:

private static Logger logger = Logger.getLogger("test");
...
@Test
public void someTest() throws Exception {
  ...
  logger.info("in test!");
  ...
}

would result in logs similar to:

[TEST] 14:17:50,086 INFO test [main,main:549] in test

Of course, this leaves out all the logs of the service under testing. To include them, we need to place our own log4j.properties on the test classpath and follow standard Log4j configuration patterns. For example, if the service uses loggers called org.acme..., then the configuration could look like the following:

log4j.appender.ROOT=org.apache.log4j.ConsoleAppender
log4j.appender.ROOT.layout=org.apache.log4j.PatternLayout
log4j.appender.ROOT.layout.ConversionPattern=%d{HH:mm:ss,SSS} %-5p %c{2} [%t,%M:%L] %m%n
log4j.rootLogger=WARN,ROOT

log4j.appender.ACME=org.apache.log4j.ConsoleAppender
log4j.appender.ACME.layout=org.apache.log4j.PatternLayout
log4j.appender.ACME.layout.ConversionPattern=[ACME] %d{HH:mm:ss,SSS} %-5p %c{2} [%t,%M:%L] %m%n
log4j.category.org.acme=TRACE,ACME
log4j.additivity.org.acme=false

The end result is that the console will merge logs from my-container, logs from the tests, and logs from the service under testing, while still showing their provenance clearly. My personal experience is that this merging proves extremely useful during debugging.

Test Isolation and Execution Performance

An important role of MyContainer is to promote the isolation of our tests. To this end, MyContainer takes a number of actions on container startup, all of which are geared to wipe out any form of state that my-container may have accumulated in previous tests. This discourages us from basing some tests on the outcome of other tests, even when we can exert control over the test order. In particular, MyContainer will:
• restore the default configuration of the container
• clean the storage directory of any stateful resource serialisation
• undeploy any Gar which is not required by the current test.
Notice that these actions are taken before the tests start, rather than once they have completed. There are at least two important justifications for this timing choice.

Firstly, during coding sessions, it allows us to inspect the state of the container as it was left at the end of the tests. In particular, we can confirm our expectations as to the deployed resources and the stateful resources that may have been created. Since my-container is installed within the service project, we can easily do so from within our own IDE.

Secondly, MyContainer can optimise container start-up by avoiding unnecessary deployments. If the resources in a Gar required by the test have not changed since their last deployment, re-deploying the Gar is happily avoided, as shown in the logs:

[CONTAINER] ... INFO mycontainer.MyContainer ... skipping deployment of sample-service because it is unchanged

The optimisation is significant, as deployments are easily the most time-consuming operations during the execution of a test, especially when services have multiple port-types and a large number of operations. Without them, my-container will start in less than 3 seconds, true to the promise that an embedded container makes for very efficient interactive testing during coding sessions. To detect change, Gar instances keep track of the time of last modification of their resources. Whenever we add a resource or a directory of resources to the Gar, the most recently changed resource provides the time of last modification of the whole Gar. MyContainer then compares this time with the time at which a Gar with the same name was last deployed, which is the time of last modification of the undeploy.xml file for that Gar.

Notwithstanding MyContainer's help with test isolation and performance, the actual degree of test isolation remains our responsibility. For maximum isolation, we could use a different instance of MyContainer in each test. This, however, comes with its own drawbacks. First, there is a performance issue: while my-container starts quickly, especially when deployments are optimised away, we are nonetheless talking seconds rather than milliseconds. Second, Globus and gCore make heavy use of static variables, and this may reintroduce the very issues of test isolation which we wanted to reduce in the first place. I believe we can obtain a good compromise between test isolation and test performance by sharing a single instance of MyContainer across a suite of strictly related tests (e.g. create tests, read tests, write tests, and so on). All the tests we place in such a suite share the same instantiation and configuration of the container. We pay the startup price once, and then execute each test in the suite in milliseconds, i.e. in timings that we have come to associate with unit testing (even if we test service operations externally, through stubs).

JUnit Embedding

Where are we going to place our test code? We could put it in the main() method of a test client, of course, but the recommended approach is to embed it in a more suitable testing framework, such as JUnit. By doing so, we get a clear structure, proper integration with IDE and build tools, and a host of testing facilities which are de facto standards.
One mapping of our testing pattern onto JUnit is the following:

public class MyTestSuite {

  static MyContainer container;

  @BeforeClass
  public static void startup() {
    Gar myGar = ...
    container = new MyContainer(myGar);
    container.start();
    ...
  }

  @Test
  public void someTest() throws Exception {...}

  @Test
  public void anotherTest() throws Exception {...}

  ...

  @AfterClass
  public static void shutdown() {
    container.stop();
    ...
  }
}

Here, the instance of MyContainer is shared across the tests of a suite, as per the approach recommended above. The static methods annotated with JUnit's @BeforeClass and @AfterClass are used to start and stop the container, respectively. The methods annotated with JUnit's @Test are the individual tests of the suite.

Annotation-driven Tests

The JUnit skeleton above can be taken as boilerplate code for our test suites with my-container. The runtime library builds on the extension facilities provided by JUnit to spare us this boilerplate and, more generally, to avoid most of the interactions with MyContainer that we have presented so far (creation, deployment, start/stop, obtaining port-type implementations and endpoint references, ...). This is the high-level mode supported by the runtime library. When we work in this mode, we simply annotate the test suite as follows:

@RunWith(MyContainerTestRunner.class)
public class MyTestSuite {...}

MyContainerTestRunner is a JUnit 4 test runner which replaces the default one to:
• create, configure, and start an instance of MyContainer before any other code in the test suite is executed by JUnit
• inject into the test suite any port-type implementation or endpoint reference which we may need
• clearly name the output of any test with the name of the test itself
• stop the underlying instance of MyContainer after any other code in the test suite has been executed by JUnit.
For example, our skeleton now takes this simpler form:
• 16.

    @RunWith(MyContainerTestRunner.class)
    public class MyTestSuite {

      @Test
      public void someTest() throws Exception {...}

      @Test
      public void anotherTest() throws Exception {...}

      ...
    }

This does not mean that we cannot have @BeforeClass and @AfterClass methods, only that we no longer need them just to start and stop a container. Of course, we still need to be able to provide our Gar/s to the underlying MyContainer. However, we can now do so indirectly, by exposing static fields that are appropriately typed and annotated. Our test runner will recognise these fields and pass the information they provide on to the instance of MyContainer that the runner handles on our behalf, e.g.:

    @RunWith(MyContainerTestRunner.class)
    public class MyTestSuite {

      @Deployment
      static Gar myGar = new Gar("my-service").addInterfaces("schema").addConfigurations("etc");

      @Test
      public void someTest() throws Exception {...}

      @Test
      public void anotherTest() throws Exception {...}

      ...
    }

Here, we have used the @Deployment annotation to flag a static field of type Gar to the runner. The runner will use it when it creates the instance of MyContainer. Since we can deploy as many Gars in my-container as we need to, we can have multiple fields annotated with @Deployment and of type Gar in our test suite. Similarly, we may define static fields for port-types and endpoint references and have the runner set their values for us, e.g.:

    @RunWith(MyContainerTestRunner.class)
    public class MyTestSuite {

      @Deployment
      static Gar myGar = new Gar("my-service").addInterfaces("schema").addConfigurations("etc");

      @Named("acme/sample/stateless")
      static Stateless pt;

      @Named("acme/sample/stateless")
      static EndpointReference epr;

      @Test
      public void someTest() throws Exception {
• 17.

        ... pt.about(...) ...
      }

      @Test
      public void anotherTest() throws Exception {
        ...
        StatelessPortType ptStub = new StatelessServiceAddressingLocator().getStatelessPortTypePort(epr);
        ... ptStub.about() ...
      }

      ...
    }

Here we have reused @Named from the standard JSR-330 to require the injection of a given port-type implementation and of an endpoint reference for it. The runner will pick up on these annotations and set the field values accordingly, well before the suite uses them in its test methods. Through the same means, and if so required, the runner can also inject the underlying instance of MyContainer into the test suite, e.g.:

    @Inject
    static MyContainer container;

where @Inject is also borrowed from JSR-330 to flag requests for (unqualified) value injections. Having the instance of MyContainer within the test suite allows us to combine low-level and high-level modes of interaction within the same test suite. In particular, we can fall back to the API of MyContainer when our tests need more staging flexibility and sophistication than annotations can achieve.

Non-Default Configuration

In all the examples above, we have relied on defaults for the location of my-container, the port on which it listens for requests, and the startup timeout. However, we may wish to override some of these defaults to gain more control over the install location, to allow for shorter or longer startup times or, less commonly, to target a different port (because of proxying issues or other restrictions). To do this, we can use the other constructors of MyContainer:
• MyContainer(String, Gar ... gars) is dedicated to non-default locations, which is the most common scenario for overriding defaults. Note that the input path is still resolved with respect to the working directory, so as to discourage absolute paths, which compromise the reproducibility of tests (e.g. new MyContainer("src/main/test/resources", ...));
• MyContainer(Properties, Gar ... gars) is the most generic of all constructors and allows us to configure all the available properties in a Properties object, or only those we care to override. Use the constants in the Utils class to name the properties to be overridden (e.g. Utils.STARTUP_TIMEOUT_PROPERTY); a sketch follows below.

Finally, note that all MyContainer constructors, including the no-arg constructor, will try to complement the configuration properties that are implicitly or explicitly provided in code with those that may be found on the classpath in a file called my-container.properties.
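In code, non-default configuration might look roughly as follows. This is only a sketch based on the constructors just described: the timeout value and its unit are assumptions (check the Utils constants for the expected format), and myGar stands for a Gar built as in the earlier examples.

    Properties props = new Properties();
    props.setProperty(Utils.STARTUP_TIMEOUT_PROPERTY, "120");   // assumed value and unit
    MyContainer container = new MyContainer(props, myGar);

    // or, for a non-default install location resolved against the working directory:
    MyContainer relocated = new MyContainer("src/main/test/resources", myGar);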
• 18. For obvious reasons, pushing non-default configuration into such a file is preferred over hard-coding it in test code. This is particularly the case when we work with the annotations discussed above, as the test runner will always create its MyContainer instance through the no-arg constructor. The property file thus allows us to override the defaults without renouncing the high-level mode of interaction with the runtime library.

Test Automation

Controlling my-container and the deployment process along the lines illustrated so far satisfies the requirement for an efficient test and debug model during interactive coding sessions, typically from within the IDE. Equally, it delivers on the promise of increased share-ability and reproducibility of tests. In turn, this creates the basis for test automation, i.e. the possibility of executing our tests during local or remote build processes. As we have already emphasised, test automation is key to the development process and is one of the main goals behind the work on my-container.

Given the facilities of the runtime library, automating the tests is a matter of build configuration. As such, it is rather sensitive to the build system that we use, be it Ant, Maven, or other. In all cases, however, we are after the possibility to:
• automatically download the distribution of my-container from a remote repository and install it in the project prior to launching the tests. Since MyContainer gives us good test isolation, we want this to happen only if previous builds have not already done it;
• trigger test compilation and execution straight after the compilation of service code, including generated stub classes, with the implication that the build ought to fail whenever a test does not pass.

Ant Automation

Let us first see how we may achieve this automation within our standard Ant buildfiles. Our default buildfiles have roughly the following target structure (up to target names):

    [Diagram: default target structure — init, process WSDLs, compile, deploy, and gar packaging on the service side; generate Stubs, compile Stubs, and deploy Stubs on the stubs side]

This structure focuses on the independent generation of two types of build artifacts:
• 19.
• a Gar archive which packages service binaries, configuration, and Wsdl interfaces
• a Jar archive with the binaries of stub code generated from the Wsdl interfaces

Since we do not need to test generated code, we introduce testing only in the process of generating the Gar archive. (As usual, an up-to-date stubs Jar must be on the test classpath for both internal and external testing.) One way of doing this leads to the following modified target structure:

    [Diagram: modified target structure — init, process WSDLs, compile, test, deploy, and gar packaging for the service, plus init Tests, compile Tests, download my-container, install my-container, and uninstall my-container for testing]

We have interposed test execution (test) between the compilation and packaging of service code (existing targets compile and package), i.e. as soon as possible. Executing the tests requires the compilation of the tests (compileTests) and the installation and, if needed, the download of my-container (install-my-container, download-my-container). Of course, compiling the suites requires compiling the service code first (existing target compile). Finally, the installation of my-container can be removed at any point (uninstall-my-container), and the configuration of most targets is centralised in initTest. An XML serialisation of this structure may look as follows:

    <!-- run test suites -->
    <target name="test" depends="compileTests,install-my-container" unless="test.skip">

    <!-- compile test suites -->
    <target name="compileTests" depends="compile,initTest" unless="test.skip">

    <!-- install my-container -->
    <target name="install-my-container" depends="initTest" unless="test.skip">

    <!-- download my-container if not installed -->
    <target name="download-my-container" depends="initTest" unless="my-container.installed">
• 20.

    <!-- uninstall my-container -->
    <target name="uninstall-my-container" depends="initTest">

    <!-- package service code -->
    <target name="package" depends="test">...</target>

Notice that target dependencies are organised in such a way as to minimise build time in case of failures; e.g. when the service fails to compile, the tests are not compiled, and when the tests fail to compile, my-container is not downloaded or installed. Notice also that we can disable all the test-related targets on demand, by setting the test.skip property:

    .../sample-service> ant -Dtest.skip=true

We could have taken the opposite route here and decided to enable test-related targets on demand, using something like if="test.do" on the test-related targets in place of unless="test.skip". The choice depends pretty much on the discipline that we want to impose upon ourselves.

With the target structure in place, let us look at the individual targets, in order of their execution:

    <target name="initTest" unless="test.skip">
      <!-- my-container installation and download directories -->
      <property name="my-container.install.dir" value="${basedir}" />
      <property name="my-container.download.dir" value="${my-container.install.dir}/.my-container" />
      <property name="my-container.dir" value="${my-container.install.dir}/my-container" />
      <!-- test source directory -->
      <property name="test.src.dir" value="test" />
      <!-- test library directory -->
      <property name="test.lib.dir" value="test-lib" />
      <!-- test binary directory -->
      <property name="build.tests.class.dir" location="${build.dir}/test-classes" />
      <!-- test reports -->
      <property name="test.reports.dir" value="${build.dir}/test-reports" />
    </target>

In initTest we specify the key locations for testing:
• where my-container should be downloaded and where it should be installed. For installation, we choose the project root, where it will be automatically discovered by MyContainer without the immediate need to define my-container.properties or to pass installation paths to MyContainer constructors. Keeping the installation outside build.dir saves us from re-downloading my-container after each cleanup. For similar reasons, we download my-container under the project root too, but into a directory that stays hidden in IDEs. Notice that the install and download directories should be added to the svn:ignore list at commit time (an example follows below);
• where the test sources and the test libraries are;
• where the test classes and the test reports ought to be written out. Since these outputs are transient, we place them under build.dir, so as to have them removed at each cleanup.
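As a concrete example of the svn:ignore note above, and assuming the default directory names configured in initTest (my-container and .my-container under the project root) and a standard svn client, the property could be set as follows and then committed like any other change:

    .../sample-service> svn propset svn:ignore "my-container
    .my-container" .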
• 21. Next, we move to the management of my-container:

    <target name="install-my-container" depends="initTest" unless="test.skip">
      <available file="${my-container.dir}" property="my-container.installed" />
      <antcall target="download-my-container" />
    </target>

    <target name="download-my-container" depends="initTest" unless="my-container.installed">
      <mkdir dir="${my-container.download.dir}" />
      <get src="http://maven.research-infrastructures.eu/nexus/service/local/artifact/maven/redirect?r=gcube-releases&amp;g=org.gcube.tools&amp;a=my-container&amp;v=RELEASE&amp;e=tar.gz&amp;c=distro"
           dest="${my-container.download.dir}/my-container.tar.gz"
           usetimestamp="true" />
      <gunzip src="${my-container.download.dir}/my-container.tar.gz" dest="${my-container.download.dir}" />
      <untar src="${my-container.download.dir}/my-container.tar" dest="${basedir}" />
    </target>

    <target name="uninstall-my-container" depends="initTest">
      <delete dir="${my-container.dir}" />
      <delete dir="${my-container.download.dir}" />
    </target>

In install-my-container we check whether an installation already exists and delegate to download-my-container. If the installation does not exist already, download-my-container fetches the latest release of my-container from our Nexus repository and unpacks it. uninstall-my-container cleans up both the installation and the downloads.

Now we move to compiling the tests:

    <target name="compileTests" depends="compile,initTest" unless="test.skip">
      <mkdir dir="${build.tests.class.dir}" />
      <path id="test.classpath">
        <path refid="service.classpath" />
        <fileset dir="${test.lib.dir}">
          <include name="*.jar" />
        </fileset>
        <pathelement location="${build.class.dir}" />
        <pathelement location="${build.tests.class.dir}" />
      </path>
      <javac srcdir="${test.src.dir}" destdir="${build.tests.class.dir}" classpathref="test.classpath" includeantruntime="false" />
    </target>

Compilation occurs against a classpath that adds the test libraries, the service binaries, and the test binaries to the classpath already used to compile the service code. Here we use a reference to another path (service.classpath), though existing buildfiles may not name the service classpath explicitly; in that case, either copy and paste its content or extract it into a named path, as sketched below.
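Where the service classpath is not already named, a named path along the following lines could be added near the top of the buildfile. This is only a sketch: the library locations (a gCore installation pointed to by GLOBUS_LOCATION, and a local lib directory) are assumptions and must be adapted to the actual project.

    <!-- expose environment variables as properties prefixed with env. -->
    <property environment="env" />

    <!-- name the classpath used to compile service code, so that test targets can reuse it -->
    <path id="service.classpath">
      <!-- container libraries, assuming GLOBUS_LOCATION points to the gCore installation -->
      <fileset dir="${env.GLOBUS_LOCATION}/lib">
        <include name="*.jar" />
      </fileset>
      <!-- service dependencies bundled with the project, assuming a lib directory -->
      <fileset dir="lib">
        <include name="*.jar" />
      </fileset>
    </path>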
• 22. What test libraries should be available? At the very least, a version of the runtime library of my-container. Since we will want to run JUnit 4 tests, we will also need ant-junit.jar, which is included in any installation of Ant from 1.7.1 onwards (older versions will not work). On the other hand, we do not need to worry about the JUnit binaries, which are bundled in a full distribution of the container. Of course, any other test utility, framework (e.g. mock libraries), or dependency that we may be using in the tests goes in test.lib.dir.

Finally, we get to test execution:

    <target name="test" depends="compileTests,install-my-container" unless="test.skip">
      <mkdir dir="${test.reports.dir}" />
      <junit printsummary="yes" haltonfailure="true" fork="yes" dir="${basedir}" includeantruntime="false">
        <classpath>
          <pathelement location="${test.src.dir}" />
          <path refid="test.classpath" />
        </classpath>
        <formatter type="brief" /> <!-- usefile="false" to get logs in the console -->
        <batchtest todir="${test.reports.dir}">
          <fileset dir="${test.src.dir}">
            <include name="**/*Test.java" />
            <include name="**/*Tests.java" />
          </fileset>
        </batchtest>
      </junit>
    </target>

We execute the tests in a separate JVM and against a classpath entirely under our control. In particular, we do not use the local Ant runtime (which may vary) and prefer instead the Ant support included in our standard container distribution. We add the test sources to the classpath so as to pick up any resources that may have been placed there to be loaded by the tests (including my-container.properties, log4j.properties, ...).

And that is it. Launching this buildfile from the console or from within the IDE will show us that, whenever we do not explicitly disable it, the execution of our test suites has become an integral part of our builds. This will help us confirm that we have not introduced regression errors as we refactor the code, before we commit the changes and Etics integrates them into gCube every night.
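For example, using the target names of the skeleton above (actual buildfiles may name them differently), ant test runs the test suites on their own, while ant package runs them and, only if they pass, packages the service:

    .../sample-service> ant test
    .../sample-service> ant package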