Automated Software Testing
by: Eli Janssen
1. What is automated testing?
Automated testing is, much like the name implies, getting the computer to do the
tedious work of ensuring that inputs yield expected outputs. This is often not as easy as
it sounds.
Automated testing can be broken down into two main pieces: driving the program
and validating the results.
1.1 Driving your program
This involves how the testing program activates the program to be tested. If you want
to test what happens when you push a certain button, you have to have some way of
pushing that button.
There are three common ways of doing this.
• Directly call the internal API that handles the button click event.
• Override the system and programmatically move the mouse to a set of screen
coordinates, then send a click event.
• Rely on testing-specific code that is inserted into the program by the test suite.
These three mostly apply to GUI testing, as command line applications are often
easier to test. A set of inputs can often be piped into the CLI program, if the program is
written to read from standard input.
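As a minimal sketch of driving a CLI program this way in Python (the program name, flag, and commands here are hypothetical):

import subprocess

# Feed a canned, scripted input to a (hypothetical) CLI program on
# stdin and capture what it writes to stdout.
result = subprocess.run(
    ["./myprogram", "--batch"],      # hypothetical program and flag
    input="add 2 2\nquit\n",         # the scripted user input
    capture_output=True,
    text=True,
)

# The captured output can now be checked against an expected value.
assert result.returncode == 0
assert "4" in result.stdout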
1.1.1 Direct API call
Calling an API from your code is easy, but it does not really test the GUI elements of
your program. Users will generally not be using the APIs when they interact with the
program; they will be using the GUI. The GUI, then, is what should be tested.
This type of testing is often used for unit testing early in product development.
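As a sketch, assuming a hypothetical Document class whose on_save_clicked handler sits behind a Save button, a direct API-call test might look like this:

import unittest

# Hypothetical application code: the handler behind a "Save" button.
class Document:
    def __init__(self):
        self.saved = False

    def on_save_clicked(self):       # the internal API behind the button
        self.saved = True

class TestSaveHandler(unittest.TestCase):
    def test_save_sets_flag(self):
        doc = Document()
        doc.on_save_clicked()        # call the handler directly; no GUI involved
        self.assertTrue(doc.saved)

if __name__ == "__main__":
    unittest.main()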
1.1.2 Mouse Macros (System Override)
Simulating the mouse with "mouse event recording macros" is seldom reliable. The
issue here is that the windows might not be in exactly the same position every time a
program is started or a new window is opened. Recorded macros also cannot account for
whether the window is maximized, minimized, moved, or resized, for the screen
resolution, and more. All of these issues may have an effect on how the GUI performs.
There are some tricks for getting around these issues. The test suite can always be run
at the same resolution, with no other applications running. The locations at which
windows open can be hard coded, or relative positioning can be used.
The benefit of this "driving" methodology is that you are actually testing the GUI.
The downside is that it is a lot of work, and many things can go wrong to mess up your
test run.
An alternative to mouse action recording is using key combinations. If keyboard events
can be used to drive the GUI, then these are often easier to automate, since there is no
need to deal with the positioning issues that mouse events entail. The downside is that
if the program is not normally driven heavily from the keyboard, then the test suite is
not testing actual expected system use. If the mouse is what users will use
predominantly, then the program should be tested as such.
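A minimal sketch of keyboard-driven automation, using the third-party pyautogui library; the shortcuts and file name shown are assumptions and would need to match the application's real bindings:

import pyautogui

# Drive the application entirely through keyboard shortcuts, avoiding
# the positioning problems of recorded mouse macros. Ctrl+O to open,
# typing a name, and Enter to confirm are assumed bindings.
pyautogui.hotkey("ctrl", "o")      # open the File dialog
pyautogui.write("testdata.txt")    # type the file name
pyautogui.press("enter")           # confirm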
The third approach relies on a combination of things. Some test suite programs insert
their own method invocations into your project code, and these invocations pass extra
information back to the test suite. Mouse macros are still recorded, but details such as
window positioning are reported to the test suite by the added code. This combination
of the previous two methods is likely the best overall approach.
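As a sketch of the kind of information such injected code can supply, here is a Tkinter example in which the harness asks a widget for its current screen position before replaying a recorded click; the offsets dx and dy would be hypothetical values taken from the macro:

import tkinter as tk

# Sketch of test-suite instrumentation: ask the widget itself where it
# sits on screen, so a recorded click can be replayed relative to the
# live window instead of to absolute screen coordinates.
def screen_position(widget):
    widget.update_idletasks()      # make sure geometry info is current
    return widget.winfo_rootx(), widget.winfo_rooty()

root = tk.Tk()
button = tk.Button(root, text="OK")
button.pack()
root.update()

x, y = screen_position(button)
# A replayed click would now target (x + dx, y + dy), where dx and dy
# come from the recorded macro, rather than fixed screen coordinates.
root.destroy()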
1.2 Results verification
After the tests have been run, there has to be some method to determine whether the
tests passed or failed. There are three main ways to do this: assumption based, human
based, or a machine comparison tool.
These generally apply to GUI testing, as once again, CLI programs are much easier
to test. The inputs and outputs of a CLI program can be piped to a file and
programmatically compared with textual comparison tools such as diff.
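A minimal sketch of that kind of textual comparison in Python, using the standard difflib module (the file names are assumptions):

import difflib

# Compare a captured test transcript against the known-good output:
# the programmatic equivalent of running diff on the two files.
with open("expected_output.txt") as f:
    expected = f.read().splitlines()
with open("actual_output.txt") as f:
    actual = f.read().splitlines()

diff = list(difflib.unified_diff(expected, actual,
                                 fromfile="expected", tofile="actual"))
if diff:
    print("\n".join(diff))    # any output here means the test failed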
1.2.1 Assumption based
This involves making assumptions as to what the expected output should be, and
comparing based on that. One example is a spell checker. The author of an automated
test wrote, "...when I was writing automation for the spelling engine in Visio I wrote a test
that typed some misspelled text into a shape: 'te'. This should get auto corrected to 'the'.
[It is] hard to programmatically check if 'the' was correctly rendered on the screen.
Instead I went and asked the shape for the text inside it and just did a string compare
with the expected result."
There are some problems with this methodology. The test assumes that the
program redraws upon spelling correction; it may be that the spelling was fixed in the
object's data without the screen being updated for the user. The key here is to
define your scope very carefully. It may not have been important for the author to test
the redraw functionality in this case. If his scope was narrow enough to specify only
a test of the correction feature in the object itself, there is no problem
with his test.
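In code, the Visio example's style of check might look like the following sketch; the app and shape objects here are hypothetical stand-ins for whatever object model the application under test exposes:

# Hypothetical object model: verify by asking the object for its text
# rather than inspecting pixels on the screen.
def test_autocorrect(app):
    shape = app.add_shape()
    shape.type_text("te")          # misspelled input
    assert shape.text == "the"     # string compare on the object's state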
1.2.2 Human based
This method relies on human interaction to perform the final test pass/fail
verification. This usually involves screenshots being taken at specified intervals during
the test run. These screenshots are then saved off for later human review.
The benefit here is the time saved clicking buttons. The downside is that someone
has to manually review the images to see whether a pass or fail occurred. This is not only
tedious, but after viewing many images, a human may become bored and miss some
critical issue that is not apparent at first glance.
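A sketch of how a test run might collect those screenshots, again using the third-party pyautogui library; the step names and file-naming scheme are illustrative only:

import time
import pyautogui

# Capture a screenshot after each test step and save it for a human
# reviewer to examine later.
steps = ["open_file", "edit_text", "save_file"]
for i, step in enumerate(steps):
    # ... drive the application through this step here ...
    time.sleep(1)                              # let the screen settle
    pyautogui.screenshot(f"run01_{i:02d}_{step}.png")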
1.2.3 Machine Comparison tool
This method, when dealing with GUIs, takes screenshots like the human based
method, but compares them programmatically. The suite compares the
screenshots to a known correct "master" set of images. The suite avoids the problems
associated with mouse macro testing (screen position, size, etc.) by capturing only the
active portion of the GUI. This captures only images pertaining to the application's
canvas, and not parts of the desktop that may change and are not involved in the test.
Once the programmatic comparison is performed, only the results are sent to the
tester(s) or stored in a reporting database. This drastically reduces the workload on the
testers, saving them from having to push buttons and look at screenshots, and raises the
efficiency of the testing process. A comparison tool is also less likely than a human to
miss something that differs from the "master" image set.
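A minimal sketch of such a master-image comparison, using the Pillow imaging library (the file paths are assumptions):

from PIL import Image, ImageChops

# Compare a freshly captured screenshot against the stored "master"
# image; getbbox() returns None when the two images are identical.
master = Image.open("master/save_dialog.png")
capture = Image.open("run01/save_dialog.png")

diff = ImageChops.difference(master.convert("RGB"),
                             capture.convert("RGB"))
if diff.getbbox() is None:
    print("PASS: capture matches master")
else:
    print("FAIL: images differ in region", diff.getbbox())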
This type of programmatic comparison is also used in CLI test comparison tools.
Often it is in the form of a diff comparison of test output against known correct output.
2. Issues with automation
Automated testing can be extremely useful. The amount of testing that can be done
using automated tools is far beyond what can be achieved with manual testing.
Some firms claim that "it would take a manual tester four months of work to produce the
results that we produce every single night with automated testing."
Many find that automated tools are only viable when they are developed "in-house"
to meet the demands of a specific application or need. Using commercial tools is expensive,
and there is only a real payoff if those tools and test scripts can easily be reused frequently.
Another issue with automated testing tools is that they often require the manipulation
of scripts specific to their environment to drive the tests. These scripts are essentially
programming languages for test suites that drive other programs. The development of
these scripts often poses problems for QA personnel, not all of whom are programmers in
their own right. Add to that the time it takes to write the scripts, to ensure that they are
bug free, and to make them modular enough to be reused in more tests than just
the current one, and you have a mounting time cost. Coupled with the view often
held by management that "if you are not coding, you are not working," this can become a
hindrance and actually reduce the effectiveness of the QA process.
Most of the commercial automated testing suites are not cheap. IBM's Rational
Robot, for example, costs over $4,000 for a single seat. There are some open-source
software testing programs, and those are free as in libre (not free as in beer).
Automated testing adds some unique issues to general QA investment. Being an
engineered, coded, and documented product itself, automated testing carries additional
costs. Both the upfront costs of purchasing tools and training employees and the ongoing
maintenance costs of the tool sets must be considered.
3. Test automation IS software development
There are some extra things to think about when considering automated testing. First,
a test automation strategy is very important, and it is very similar to the regular software
development cycle. Documentation is key, and along with it, requirements and scope
definitions.
In addition to the need for good documentation, there is a need for coding skill. Some
statistics suggest that, in some instances, nearly as much code is written to automate the
testing of a project as is written for the project itself.
Test automation is often thought of solely as testing the entire application, and
GUIs often spring to mind. This is not true for all cases. Much automated testing
is done at the integration, and even unit testing, levels. In fact, the sooner testing can be
done the better, and this is even more applicable to automated testing.
Modularity in automated testing scripts is important if the scripts are to have a
useful life beyond the test at hand, as in the sketch below.
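As a sketch of what modularity buys, the hypothetical driver API below factors a shared login step into one helper that many tests can reuse:

# Hypothetical driver API: common steps live in one reusable helper so
# that many tests (and future maintainers) can share them.
def login(driver, user, password):
    driver.find("username").type(user)
    driver.find("password").type(password)
    driver.find("login_button").click()

def test_profile_page(driver):
    login(driver, "alice", "secret")   # reused step, not re-recorded
    driver.find("profile_link").click()
    assert driver.title() == "Profile"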
4. When Automated Testing Can Go Bad
There are some instances where automated testing poses additional problems and is
almost always doomed to failure. The first is spare-time automation: people are allowed,
or only have time, to work on automation as a back-burner project. This not only lowers
interest in the project, but anything that does get turned out will likely be of poor
quality, since it was never made a priority.
A lack of clear goals can further impact automated testing. What is to be expected of
the automated tests needs to be as clear and laid out as any requirements document for a
programming project. Indeed, automated testing may require a good deal of programming
to get it working correctly.
High turnover also poses a problem. If there is not a dedication in the QA staff to
make the most of automated testing tools, and if only one or two people ever work on
utilizing those tools, then there is a high probability that when they leave, it will be
difficult for others to use the tests that they have developed.
And finally, automated testing is often looked at as a panacea for the QA process. In
reality it is a lot of work and requires careful planning to be successful. Automated testing
is much harder than manual testing; it actually makes the effort more complex, since there
is now another software development effort added on top.
Some tests lend themselves to automation; some do not. Care must be taken to
discern which ones do, and to come up with a well structured test plan and automation
strategy.
Despite these issues, automated testing is proving to be a great asset to many
development firms and QA divisions.
Automated testing is allowing many companies to do more thorough testing of their
products. This is in line with many software development paradigms, such as Extreme
Programming, which call for testing at many steps along the development cycle, not just
at the end.
Automated testing also gives developers something to build toward. Because automated
test scripts are designed from the requirements specifications and reflect a very user-centric
view of the product, they can be a great asset to product cohesiveness.
Remember, wear your user hat!
 Dickens, Charles. "Software Test Engineering." Microsoft MSDN Articles.
 Earis, Alan. "Are automated test tools for real?" Application Development Trends,
May 2004. http://www.adtmag.com/article.asp?id=9307
 Zallar, Kerry. "Automated Software Testing: A Perspective."
 Zallar, Kerry. "Are you ready for the Test Automation Game?" Software Quality
Engineering, Nov/Dec 2001, Vol. 3, Issue 6.