You Can't Evaluate a Test Tool by Reading a Data Sheet 
All data sheets look virtually alike. The buzzwords are the
same: "Industry Leader", "Unique Technology",
"Automated Testing", and "Advanced Techniques". The
screen shots are similar: bar charts, flow charts,
HTML reports, and status percentages.
It is mind numbing.
What is Software Testing? 
All people who have done software testing recognize that testing comes in many flavors. For
simplicity, we will use three terms in this paper:
System Testing 
Integration Testing 
Unit Testing 
Everyone does some amount of system testing, where they do some of the same things with the
application that the end users will do with it. Notice that we said "some" rather than "all." One of the
most common causes of applications being fielded with bugs is that unexpected, and therefore
untested, combinations of inputs are encountered by the application in the field.
Far fewer people do integration testing, and fewer still do unit testing. If you have done
integration or unit testing, you are probably painfully aware of the amount of test code that must be
generated to isolate a single file or group of files from the rest of the application. At the most stringent
levels of testing, it is not uncommon for the amount of test code written to be larger than the amount
of application code being tested. As a result, these levels of testing are generally applied to
mission- and safety-critical applications in markets such as aviation, medical devices, and railway.
What Does "Automated Testing" Mean? 
It is well known that the process of unit and integration testing manually is very expensive and
time consuming; consequently every tool that is being sold into this market will trumpet "Automated
Testing" as its benefit. But what exactly is "automated testing"? Automation means different
things to different people. To many engineers the promise of "automated testing" means that they
can press a button and either get a "green check" indicating that their
code is correct, or a "red x" indicating failure.
Unfortunately this tool does not exist. More importantly, if this tool did exist, would you want to
use it? Think about it. What would it mean for a tool to tell you that your code is "Ok"? Would it
mean that the code is formatted nicely? Maybe. Would it mean that it conforms to your coding
standards? Maybe. Would it mean that your code is correct? Emphatically No!
Completely automated testing is not attainable, nor is it desirable. Automation should
address those parts of the testing process that are algorithmic in nature and labor intensive. This
frees the software engineer to do higher-value testing work such as designing better and more
complete tests.
The logical question to ask when evaluating tools is: "How much automation does this tool
provide?" This is the large gray area, and the primary source of uncertainty when a company
attempts to calculate an ROI for a tool investment.
Anatomy of Test Tools 
Test tools generally provide a variety of functionality. The names vendors use will be different for
different tools, and some functionality may be missing from some tools. For a common frame of
reference, we have chosen the following names for the "modules" that might exist in the test tools you are
evaluating:
Parser: The parser module allows the tool to understand your code. It reads the code and builds an
intermediate representation of it (usually in a tree structure), much as a compiler does. The
output, or "parse data", is generally saved in an intermediate language (IL)
file.
CodeGen: The code generator module uses the "parse data" to construct the test harness source 
code. 
Test Harness: While the test harness is not strictly part of the tool, the decisions made in the
test harness architecture affect all of the other features of the tool. So the harness architecture is
very important when evaluating a tool.
Compiler: The compiler module allows the test tool to invoke the compiler to compile and link the test
harness components.
Target: The target module allows tests to be easily run in a number of runtime environments 
including support for emulators, simulators, embedded debuggers, and commercial RTOS. 
Test Editor: The test editor allows the user to use a scripting language or a sophisticated graphical
user interface (GUI) to set up preconditions and expected values (pass/fail criteria) for test cases.
Coverage: The coverage module allows the user to get reports on which parts of the code are
executed by each test.
Reporting: The reporting module allows the various captured data to be compiled into project
documentation.
CLI: A command line interface (CLI) allows further automation of the use of the tool, allowing
the tool to be invoked from scripts, make, etc.
Regression: The regression module allows tests that are created against one version of the
application to be re-run against new versions.
Integrations: Integrations with third-party tools are an interesting way to leverage your
investment in a test tool. Common integrations are with configuration management,
requirements management, and static analysis tools.
Later sections will elaborate on how you should evaluate each of these modules in your
candidate tools.
Classes of Test Tools / Levels of Automation 
Since not all tools include all of the functionality or modules described above, and because
there is a wide difference between tools in the level of automation provided, we have
created the following broad classes of test tools. Candidate test tools will fall into one of these
categories.
"Manual" tools generally create an empty framework for the test harness, and require you to hand-code
the test data and logic required to implement the test cases. Often, they will provide a scripting
language and/or a set of library functions that can be used to do common things like
test assertions or create formatted reports for test documentation.
"Semi-Automated" tools may put a graphical interface on some automated functionality provided by
a "manual" tool, but will often still require hand-coding and/or scripting in order to test more
complex constructs. Additionally, a "semi-automated" tool may be missing some of the modules
that an "automated" tool has; built-in support for target deployment, for example.
"Automated" tools will address each of the functional areas or modules listed in the previous
section. Tools in this class will not require manual hand-coding and will support all
language constructs as well as a variety of target deployments.
Subtle Tool Differences 
In addition to comparing tool features and automation levels, it is also important to evaluate
and compare the test approach used. Subtle differences may hide latent defects in the tool, so it is important to
not just load your code into the tool, but to also try to build some simple test cases for each
method in the class that you are testing. Does the tool build a complete test harness? Are all stubs
created automatically? Can you use the GUI to define parameters and global data for the
test cases, or are you required to write code as you would if you were testing manually?
Similarly, target support differs a lot between tools. Be wary if a vendor says: "We support
all compilers and targets out of the box". These are code words for: "You do all the work to make
our tool work with your environment".
How to Evaluate Test Tools 
The following sections will describe, in greater detail, the information that you should investigate
during the evaluation of a software testing tool. Ideally you should confirm this information with hands-on
testing of each tool being considered.
Since the rest of this paper is fairly technical, we would like to explain some of the
conventions used. For each section, we have a title that describes an issue to be considered, a description
of why the issue is important, and a "Key Points" section to summarize concrete items to be
considered.
Also, while we are discussing conventions, we should also make note of terminology. The term
"function" refers to either a C function or a C++ class method; "unit" refers to a C file or
a C++ class. Finally, please remember that nearly every tool can somehow support the items
mentioned in the "Key Points" sections; your job is to evaluate how automated, easy to use,
and complete the support is.
Parser and Code Generator 
It is fairly easy to build a parser for C; however it is incredibly difficult to build a complete parser
for C++. One of the questions to be answered during tool evaluation should be: "How robust
and mature is the parser technology?" Some tool vendors use commercial parser technology
that they license from parser technology companies, and some have homegrown parsers that they
have built themselves. The robustness of the parser and code generator can be verified by
evaluating the tool with complex code constructs that are representative of the code to be used
for your project.
Key Points: 
- Is the parser technology commercial or homegrown? 
- What languages are supported? 
- Are the tool versions for C and C++ the same tool or different tools?
- Is the entire C++ language implemented, or are there restrictions?
- Does the tool support our most complicated code?
The Test Driver 
The Test Driver could be the "main program" that controls quality. Here is really a simple example of 
your driver that can test the sine function from the standard C library: 
#include <math.h>
#include <stdio.h>

int main () {
    float local;
    local = sin (90.0);
    if (local == 1.0) printf ("My Test Passed!\n");
    else printf ("My Test Failed!\n");
    return 0;
}
Although this is a very simple example, a "manual" tool might require you to type (and debug) this
little snippet of code by hand; a "semi-automated" tool might provide some sort of scripting
language or simple GUI to enter the stimulus value for sine. An "automated" tool would have a full-featured
GUI for building test cases, integrated code coverage analysis, an integrated debugger, and integrated
target deployment.
We wonder if you noticed that this driver has a bug. The bug is that the sin function actually takes
radians, not degrees, as its input angle.
Key Points 
- Is the driver automatically generated or do I write the code? 
- Can I test the following without writing any code: 
- Testing over the range of values 
- Combinatorial Testing 
- Data Partition Testing (Equivalence Sets) 
- Lists of input values 
- Lists of expected values 
- Exceptions as expected values
- Signal handling 
- Can I set up a sequence of calls to different methods within the same test?
Stubbing Dependent Functions 
Building replacements for dependent functions is essential when you want to manipulate the values 
which a dependent function returns throughout a test. Stubbing can be a really important portion of 
integration and unit testing, because it allows one to isolate the code under test from other parts of 
your application, plus much more easily stimulate the execution from the unit or sub-system of 
interest. 
Many tools require manual generation of the test code to make a stub a single thing more than 
return a static scalar value (return 0;) 
Key Points 
- Are stubs automatically generated, or do you write code for them?
- Are complex outputs supported automatically (structures, classes)? 
- Can each call of the stub return a different value?
- Does the stub keep track of how many times it was called?
- Does the stub keep track of the input parameters over multiple calls?
- Can you stub calls to standard C library functions like malloc?
Test Data 
There are two basic approaches that "semi-automated" and "automated" tools use to implement test
cases. One is a "data-driven" architecture, and the other is a "single-test"
architecture.
For a data-driven architecture, the test harness is created for all of the units under test and supports
all of the functions defined in those units. When a test is to be run, the tool simply provides the
stimulus data across a data stream such as a file handle or a physical interface like a
UART.
For a "single-test" architecture, each time a test is run, the tool will build the test driver for that test,
and compile and link it into an executable. A couple of points on this: first, all of the extra
code generation required by the single-test method, and the compiling and linking, will take more
time at test execution time; second, you end up building a separate test harness for each test case.
This means that a candidate tool might appear to work for some nominal cases but may
not work correctly for more complex tests.
Key Points 
- Is the test harness data driven? 
- How long does it take to start a test case (including any code generation and compiling time)? 
- Can the test cases be edited outside of the test tool IDE?
- If not, have I done enough free play with the tool with complex code examples to understand any
limitations?
Automated Generation of Test Data 
Some "automated" tools provide a degree of automated test case creation. Different approaches are
used to do this. The following paragraphs describe some of these approaches:
Min-Mid-Max (MMM) test cases will stress a function at the bounds of its input data types.
C and C++ code often will not protect itself against out-of-bound inputs. The engineer has some
functional range in mind, and often does not protect against out-of-range inputs.
Equivalence Classes (EC) tests create "partitions" for each data type and select a sample of values
from each partition. The assumption is that values from the same partition will stimulate the
application in a similar way.
Random Values (RV) tests will set combinations of random values for each of the parameters of a
function.
Basic Paths (BP) tests use basis path analysis to test the unique paths that exist through a
procedure. BP tests can automatically create a high level of branch coverage.
The key thing to keep in mind when thinking about automatic test case creation is the purpose
that it serves. Automated tests are good for testing the robustness of the application code, but not
the correctness. For correctness, you must create tests that are based on what the application
is supposed to do, not what it does do.
Compiler Integration 
The point of the compiler integration is two-fold. One point is to allow the test harness components to
be compiled and linked automatically, without the user having to figure out the compiler options
needed. The other point is to allow the test tool to honor any language extensions that are unique
to the compiler being used. Especially with cross-compilers, it is quite common for the
compiler to provide extensions that are not part of the C/C++ language standards. Some tools use
the approach of #defining these extensions to null strings. This very crude approach is very bad since
it changes the object code the compiler produces. For example, consider the following global
extern with a GCC attribute:
extern int MyGlobal __attribute__ ((aligned (16))); 
If your candidate tool does not maintain the attribute when
defining the global object MyGlobal, then the code will
behave differently during testing than it will
when deployed, because the memory will not be aligned
the same way.
Key Points 
- Does the tool automatically compile and link the test harness?
- Does the tool honor and implement compiler-specific language extensions?
- What kind of interface is there to the compiler (IDE, CLI, etc.)?
- Does the tool have an interface to import project settings from your development environment, or
must they be entered manually?
- If the tool does import project settings, is this import feature general purpose or limited to
specific compilers or compiler families?
- Is the tool integrated with your debugger to allow you to debug tests?
Support for Testing on an Embedded Target
In this section we will use the term "tool chain" to refer to the total cross-development environment
including the cross-compiler, debug interface (emulator), target board, and Real-Time Operating
System (RTOS). It is important to consider whether the candidate tools have robust target
integrations for your tool chain, and to understand what parts of the tool need to change if you
migrate to a different tool chain.
Additionally, it is important to understand the automation level and robustness of the target
integration. As mentioned earlier, if a vendor says: "we support all compilers and all
targets out of the box," they mean: "You do all of the work to make our tool work with your
environment."
Ideally, the tool that you select should enable "push button" test execution where all of the complexity
of downloading to the target and capturing the test results back on the host is abstracted into the "Test
Execution" feature, so that no special user actions are required.
An additional complication with embedded target testing is hardware availability. Often, the
hardware is being developed in parallel with the software, or there is limited hardware
availability. A key feature is the ability to start testing in a native environment and later
transition to the actual hardware. Ideally, the tool artifacts are hardware independent.
Key Points 
- Is my tool chain supported? If not, can it be supported? What does "supported" mean?
- Can I build tests on a host system and later use them for target testing? 
- How does the test harness get downloaded to the target?
- How are the test results captured back on the host? 
- What targets, cross compilers, and RTOS are supported off-the-shelf? 
- Who builds the support for a new tool chain?
- Is any part of the tool chain integration user configurable? 
Test Case Editor 
Obviously, the exam case editor is to try and will spend much of your interactive time utilizing a test 
tool. If there exists true automation with the previous items mentioned in this paper, then the 
amount of time owing to setting up quality environment, and also the target connection must be 
minimal. Remember that which you said at the start, you require to use the engineer's time for it to 
design better plus more complete tests. 
The key element to evaluate is how hard it is to set up test inputs and expected values for non-trivial
constructs. All tools in this market provide some easy way to set up scalar values. For example,
does your candidate tool provide a simple and intuitive way to construct a class? How about an easy way
to build an STL container, like a vector or a map? These are the things to evaluate
in the test case editor.
As with the rest of this paper, there is "support" and then there is "automated support". Take this into
account when evaluating constructs that may be of interest to you.
Key Points 
- Are allowed ranges for scalar values shown?
- Are array sizes shown? 
- Is it easy to set Min and Max values with tags rather than hard-coded values? This is important to
maintain the integrity of the test if a type changes.
- Are special floating point numbers supported (e.g. NaN, +/- Infinity)?
- Can you do combinatorial tests (vary 5 parameters over a range and have the tool do all
combinations of those values)?
- Is the editor "base aware" so that you can easily enter values in alternate bases like hex, octal,
and binary?
- For expected results, can you easily enter absolute tolerances (e.g. +/- 0.05) and relative
tolerances (e.g. +/- 1%) for floating point values?
- Can test data be easily imported from other sources like Excel?
Code Coverage 
Most "semi-automated" tools and all "automated" tools have some code coverage facility
built in that allows you to see metrics which show the portion of the application that is executed
by your test cases. Some tools present this information in table form; some show flow graphs,
and some show annotated source listings. While tables are good as a summary, if you are
trying to achieve 100% code coverage, an annotated source listing is the best. Such a listing
will show the original source code file with colorations for covered, partially covered, and uncovered
constructs. This allows you to easily see the additional test cases that are needed to reach
100% coverage.
It is important to understand the impact that instrumentation has on your application.
There are two considerations: one is the increase in size of the object code, and the other
is the run-time overhead. It is important to understand whether your application is memory limited or real-time
limited (or both). This will help you focus on which item is most important for your
application.
Key Points 
- What is the code size increase for each type of instrumentation?
- What is the run-time increase for each type of instrumentation?
- Can instrumentation be integrated into your "make" or "build" system?
- How are the coverage results presented to the user? Are there annotated listings with
a graphical coverage browser, or just tables of metrics?
- How is the coverage information retrieved from the target? Is the process flexible? Can data
be buffered in RAM?
- Are statement, branch (or decision) and MC/DC coverage supported? 
- Can multiple coverage types be captured in one execution? 
- Can coverage data be shared across multiple test environments (e.g. can some coverage be
captured during system testing and be combined with the coverage from unit and integration
testing)?
- Can you step through the test execution using the coverage data to understand the flow of control through
your application without using a debugger?
- Can you get aggregate coverage for all test runs in a single report?
- Can the tool be qualified for DO-178B as well as for Medical Device intended use? 
Regression Testing 
There should be two basic goals for adopting a test tool. The primary goal is to save time
testing. If you have read this far, we assume that you agree with that! The secondary goal is
to allow the created tests to be leveraged over the life cycle of the application. This means
that the time and money invested in building tests should result in tests that are re-usable
as the application changes over time, and easy to configuration manage. The major thing to evaluate
in your candidate tool is what specific things need to be "saved" in order to run the same
tests in the future, and how the re-running of tests is controlled.
Key Points 
> What file or files need to be configuration managed to regression test?
> Does the tool have a complete and documented Command Line Interface (CLI)? 
> Are these files plain text or binary? This affects your ability to use a diff utility to evaluate
changes over time.
> Do the harness files generated by the tool have to be configuration managed?
> Is there integration with configuration management tools? 
> Create a test for a unit, now change the name of a parameter, and re-create your test
environment. How long does this take? Is it complicated?
> Does the tool support database technology and statistical graphs to allow for trend analysis of test
execution and code coverage over time?
> Can you test multiple baselines of code with the same set of test cases automatically?
> Is distributed testing supported, to allow portions of the tests to be run on different physical
machines to speed up testing?
Reporting 
Most tools provide similar reporting. Minimally, they should create an easy-to-understand report
showing the inputs, expected outputs, actual outputs, and a comparison of the expected and actual
values.
Key Points 
> What output formats are supported? HTML? Text? CSV? XML? 
> Is it easy to get both a high-level (project-wide) report and a detailed report for an
individual function?
> Is the report content user configurable? 
> Is the report format user configurable? 
Integration with Other Tools 
Regardless of the quality or usefulness of a particular tool, all tools need to operate in a multi-vendor
environment. A lot of time and money has been spent by big companies buying little
companies with the idea of offering "the tool" that will do everything for everybody. The
interesting thing is that, most often with these mega tool suites, the whole is a lot less
than the sum of the parts. It seems that companies often take 4-5 pretty cool small tools and
integrate them into one bulky and unusable tool.
Key Points 
> Which tools does your candidate tool integrate with out-of-the-box, and can the end-user add
integrations?
Additional Desirable Features for the Testing Tool 
The previous sections all describe functionality that should be in any tool that is considered an
automated test tool. In the next few sections we will list some desirable features, along
with a rationale for the importance of each feature. These features may have varying degrees
of applicability to your particular project.
True Integration Testing / Multiple Units Under Test 
Integration testing is an extension of unit testing. It is used to test interfaces between units,
and requires you to combine units that make up some functional process. Many tools claim to
support integration testing by linking the object code for real units into the test harness. This
method builds multiple files into the test harness executable but provides no ability to stimulate
the functions within these additional units. Ideally, you would be able to stimulate any function within any
unit, in any order, within a single test case. Testing the interfaces between units will generally
uncover a lot of hidden assumptions and bugs in the application. In fact, integration testing may
be a good first step for those projects that have no history of unit testing.
Key Points
> Can I include multiple units in the test environment?
> Can I create complex test scenarios for these classes where we
stimulate a sequence of functions across multiple units within one test case?
> Can I capture code coverage metrics for multiple units? 
Dynamic Stubbing 
Dynamic stubbing means that you can turn individual function stubs on and off dynamically. This
allows you to create a test for a single function with all other functions stubbed (even if they
exist in the same unit as the function under test). For very complex code, this is a
great feature and it makes testing much easier to implement.
Key Points 
> Can stubs be chosen at the function level, or only at the unit level?
> Can function stubs be turned on and off per test case?
> Are the function stubs automatically generated (see items in previous section)? 
Library and Application Level Thread Testing (System Testing) 
One of the challenges of system testing is that the test stimulus provided to the fully
integrated application may require a user pushing buttons, flipping switches, or typing at a
console. If the application is embedded, the inputs can be even more complicated to control. Suppose
you could stimulate your fully integrated application at the function level, similar to how integration
testing is done. This would allow you to build complex test scenarios that rely only on the API of
the application.
Some of the more modern tools allow you to test this way. An additional benefit of this mode of
testing is that you do not need the source code to test the application. You simply need the definition of
the API (generally the header files). This methodology gives testers an automated and scriptable way
to perform system testing.
Agile Testing and Test Driven Development (TDD) 
Test Driven Development promises to bring testing into the development process earlier than ever
before. Instead of writing application code first and then writing your unit tests as an afterthought, you
build your tests before the application code. This is a popular new approach to development and
enforces a test-first and test-often approach. Your automated tool should support this method of
testing if you plan to use an Agile Development methodology.
Bi-directional Integration with Requirements Tools 
If you care about associating requirements with test cases, it is desirable for a test tool to
integrate with a requirements management tool. If you are interested in this feature, it
is important that the interface be bi-directional, so that when requirements are tagged to
test cases, the test case information such as test name and pass/fail status can be pushed
back into your requirements database. This will allow you to get a sense of the completeness of
your requirements testing.
Tool Qualification 
If you are operating in a regulated environment such as commercial aviation or Class III
medical devices, then you may be obligated to "qualify" the development tools used to build
and test your application.
The qualification involves documenting what the tool is supposed to do and tests that prove
that the tool operates in accordance with those requirements. Ideally a vendor will have these materials
off-the-shelf and a history of customers who have used the qualification data for your industry.
Key Points 
> Does the tool vendor offer qualification materials that are produced for your exact target 
environment and tool chain? 
> What projects have successfully used these materials?
> How are the materials licensed?
> How are the materials customized and approved for a particular project?
> If this is an FAA project, have the qualification materials been successfully used to
certify to DO-178B Level A?
> If it is an FDA project, have the tools been qualified for "intended use"?
Conclusion 
Hopefully this paper provides useful information that helps you navigate the offerings of test
tool vendors. The relative importance of each of the items
raised will be different for different projects. Our final suggestions are:
> Evaluate the candidate tools on code that is representative of the complexity of the code in
your application
> Evaluate the candidate tools with the same tool chain that will be used for your project
> Talk to long-term customers of the vendor and ask them some of the questions raised in this
paper
> Ask about the tool's technical support team. Try them out by submitting some questions
directly to their support (rather than to their salesman)
Finally, remember that nearly every tool can somehow support the items mentioned in the "Key
Points" sections. Your job is to evaluate how automated, easy to use, and complete the support
is.
About Vector Software
Vector Software, Inc., is the leading independent provider of automated software testing tools
for developers of safety-critical embedded applications. Vector Software's VectorCAST line of
products automates and manages the complex tasks associated with
unit, integration, and system-level testing. VectorCAST products support the C, C++, and Ada
programming languages.

test

  • 1. test You Can't Evaluate a Test Tool by Reading a Data Sheet All data sheets look virtually alike. The buzzwords is the same: "Industry Leader", "Unique Technology", "Automated Testing", and "Advanced Techniques". The screen shots offer a similar experience: "Bar Charts", "Flow Charts", "HTML reports" and "Status percentages". It is mind numbing. What is Software Testing? All people who have done software testing recognize that testing comes in many flavors. For simplicity, we'll use three terms on this paper: System Testing Integration Testing Unit Testing Everyone does some quantity of system testing where they actually do some with the same things from it that the users will do by it. Notice that we said "some" rather than "all." One with the most common factors behind applications being fielded with bugs is the fact that unexpected, and for that reason untested, combinations of inputs are encountered through the application much more the field. Not as numerous folks do integration testing, as well as fewer do unit testing. If you have done integration or unit testing, you may be painfully aware of the volume of test code that has got to be generated to isolate just one file or gang of files in the rest in the application. At the most stringent levels of testing, it is not uncommon for the quantity of test code written being larger than the level of application code being tested. As a result, these amounts of testing are likely to be applied to mission and safety critical applications in markets for example aviation, medical device, and railway. What Does "Automated Testing" Mean? It is well known how the process of unit and integration testing manually is quite expensive and frustrating; consequently every tool that's being sold into forex trading will trumpet "Automated Testing" for their benefit. But precisely what is "automated testing"? Automation means different things to different people. 
To many engineers the promise of "automated testing" implies that they can press a control button and they are going to either have a "green check" indicating that their code is correct, or possibly a "red x" indicating failure. Unfortunately this tool won't exist. More importantly, if this tool did exist, would you want to work
  • 2. with it? Think about it. What would it mean for any tool to inform you your code is "Ok"? Would it mean how the code is formatted nicely? Maybe. Would it imply that it conforms for a coding standards? Maybe. Would it mean that your particular code is correct? Emphatically No! Completely automated exams are not attainable nor could it be desirable. Automation should address those areas of the testing process that are algorithmic anyway and labor intensive. This frees the application engineer to perform higher value testing work like designing better and much more complete tests. The logical question to be asked when looking for tools is: "How much automation does this tool provide?" This is the large gray area and also the primary section of uncertainty when a company attempts to calculate an ROI for tool investment. Anatomy of Test Tools Test Tools generally give you a variety of functionality. The names vendors use will be different for different tools, and a few functionality could be missing from some tools. For a common frame of reference, we have chosen these names to the "modules" that might exist in the test tools you're evaluating: Parser: The parser module allows the tool to be aware of your code. It reads the code, and fosters an intermediate representation for your code (usually in the tree structure). Basically the comparable to the compiler does. The output, or "parse data" is normally saved in an intermediate language (IL) file. CodeGen: The code generator module uses the "parse data" to construct the test harness source code. Test Harness: While the test harness just isn't specifically section of the tool; the decisions made in quality harness architecture affect all the other features with the tool. So the harness architecture is extremely important when looking at a tool. Compiler: The compiler module allows quality tool to invoke the compiler to compile and link test harness components. 
Target: The target module allows tests to be easily run in a number of runtime environments including support for emulators, simulators, embedded debuggers, and commercial RTOS. Test Editor: The test editor allows the user to use the scripting language or a sophisticated graphical user interface (GUI) to setup preconditions and expected values (pass/fail criteria) for test cases. Coverage: The coverage module allows the user to get reports on what this elements of the code are executed by each test. Reporting: The reporting module allows the different captured data to get compiled into project documentation. CLI: A command line interface (CLI) allows further automation with the use from the tool, allowing the tool to be invoked from scripts, make, etc. Regression: The regression module allows tests which might be created against one version of the
  • 3. application to be re-run against new versions. Integrations: Integrations with third-party tools is an interesting strategy to leverage your investment in a very test tool. Common integrations are with configuration management, requirements management tools, and static analysis tools. Later sections will elaborate on how you should evaluate these http://www.accenture.com/us-en/Pages/service-application-testing-overview.aspx modules with your candidate tools. Classes of Test Tools / Levels of Automation Since all tools tend not to include all functionality or modules described above as well as because there is an extensive difference between tools within the level of automation provided, we have created the following broad classes of test tools. Candidate test tools will fall into one of such categories. "Manual" tools generally create jail framework for the test harness, and require you to hand-code quality data and logic forced to implement the test cases. Often, they will provide a scripting language and/or possibly a set of library functions that might be used to complete common items like test assertions or create formatted reports for test documentation. "Semi-Automated" tools may put a graphical interface on some Automated functionality supplied by a "manual" tool, and often will still require hand-coding and/or scripting in-order to evaluate more complex constructs. Additionally, a "semi-automated" tool might be missing many of the modules make fish an "automated" tool has. Built in support for target deployment by way of example. "Automated" tools will address each with the functional areas or modules listed inside the previous section. Tools with this class won't require manual hand coding and definately will support all language constructs also a various target deployments. Subtle Tool Differences In addition to comparing tool features and automation levels, additionally it is important to evaluate and compare quality approach used. 
This may hide latent defects inside the tool, so it is important to not just load your code in the tool, but to also attempt to build some simple test cases for every method in the class that you're testing. Does the tool develop a complete test harness? Are all stubs created automatically? Can you make use of the GUI to define parameters and global data for that test cases or are you required to write code while you would had you been testing manually? In a similar way target support differs a lot between tools. Be wary if your vendor says: "We support all compilers and targets out of the box". These are code words for: "You do all the work to generate our tool work with your environment". How to Evaluate Test Tools The following few sections will describe, in greater detail, information that you ought to investigate in the evaluation of the software testing tool. Ideally you need to confirm these records with hands-on testing of every tool being considered. Since the entire content of this paper is pretty technical, we would like to explain a number of the
  • 4. conventions used. For each section, we've got a title that describes an issue to be considered, some of why the problem is important, along with a "Key Points" section in summary concrete items being considered. Also, while we are discussing conventions, we have to also make note of terminology. The term "function" refers to sometimes a C function or a C++ class method, "unit" describes a C file or possibly a C++ class. Finally, please remember, nearly all tool can somehow secure the items mentioned inside "Key Points" sections, your career is to evaluate how automated, easy to work with, and handle the support is. Parser and Code Generator It is fairly easy to develop a parser for C; however it is incredibly difficult to build a complete parser for C++. One in the questions to become answered during tool evaluation ought to be: "How robust and mature will be the parser technology"? Some tool vendors use commercial parser technology that they license from parser technology companies and a few have homegrown parsers that they have built themselves. The robustness in the parser and code generator might be verified by evaluating the tool with complex code constructs that are representative from the code to be used for the project. Key Points: - Is the parser technology commercial or homegrown? - What languages are supported? - Are tool versions for C and C++ a similar tool or different? - Is your entire C++ language implemented, or are their restrictions? - Does the tool help our most complicated code? The Test Driver The Test Driver could be the "main program" that controls quality. Here is really a simple example of your driver that can test the sine function from the standard C library: #include #include int main () float local; local = sin (90.0); if (local == 1.0) printf ("My Test Passed!n"); else printf ("My Test Failed!n");
  • 5. return 0; Although it is a pretty simple example, a "manual" tool might require you to type (and debug) this little snippet of code manually, a "semi-automated" tool might offer you some sort of scripting language or simple GUI to get in the stimulus value for sine. An "automated" tool would've a full-featured GUI for building test cases, integrated code coverage analysis, a debugger, and an internal target deployment. I wonder in case you noticed that this driver carries a bug. The bug is the sin function actually uses radians not degrees for that input angle. Key Points - Is the driver automatically generated or do I write the code? - Can I test the following without writing any code: - Testing over the range of values - Combinatorial Testing - Data Partition Testing (Equivalence Sets) - Lists of input values - Lists of expected values - Exceptions not surprisingly values - Signal handling - Can I set up a sequence of calls to several methods inside the same test? Stubbing Dependent Functions Building replacements for dependent functions is essential when you want to manipulate the values which a dependent function returns throughout a test. Stubbing can be a really important portion of integration and unit testing, because it allows one to isolate the code under test from other parts of your application, plus much more easily stimulate the execution from the unit or sub-system of interest. Many tools require manual generation of the test code to make a stub a single thing more than return a static scalar value (return 0;) Key Points - Arestubs automatically generated, or does one write code for them? - Are complex outputs supported automatically (structures, classes)? - Can each call in the stub return another value?
  • 6. - Does the stub keep track of how many times it absolutely was called? - Does the stub keep track from the input parameters over multiple calls? - Can you stub calls towards the standard C library functions like malloc? Test Data There are two basic approaches that "semi-automated" and "automated" tools use to implement test cases. One is often a "data-driven" architecture, and also the other is really a "single-test" architecture. For a data-driven architecture, the exam harness is made for all in the units under test and supports all in the functions defined in those units. When the test is to get run, the tool simply provides the stimulus data across a data stream including a file handle or possibly a physical interface just like a UART. For a "single-test" architecture, each time a test runs, the tool will build the test driver to the test, and compile and link it into an executable. A couple of points on this; first, every one of the extra code generation required from the single-test method, and compiling and linking will require more time at test execution time; second, you get building a separate test harness for every test case. This implies that a candidate tool might appear to dedicate yourself some nominal cases but may well not work correctly for more complicated tests. Key Points - Is the test harness data driven? - How long does it take to start a test case (including any code generation and compiling time)? - Can the test cases be edited outside of the exam tool IDE? - If not, have I done enough free play with the tool with complex code examples to comprehend any limitations? Automated Generation of Test Data Some "automated" tools give you a degree of automated test case creation. Different approaches are used to do this. The following paragraphs describe many of these approaches: Min-Mid-Max (MMM) Test Cases tests will stress a function with the bounds in the input data types. 
C and C++ code often won't protect itself against out-of-bound inputs. The engineer has some functional range in their mind and they often don't protect themselves against from range inputs. Equivalence Classes (EC) tests create "partitions" for every data type and select a sample of values from each partition. The assumption is always that values from the same partition will stimulate the application in the similar way. Random Values (RV) tests will set combinations of random values for each from the parameters of a function.
  • 7. Basic Paths (BP) tests utilize the basis path analysis to check the unique paths available through a procedure. BP tests can automatically create a high level of branch coverage. The key thing to keep in mind when thinking of automatic test case construction will be the purpose that it serves. Automated tests are ideal for testing the robustness from the application code, but not the correctness. For correctness, you have to create tests which can be based on the the application is supposed to complete, not exactly what it does do. Compiler Integration The point from the compiler integration is two-fold. One point is to allow test harness components to get compiled and linked automatically, without the user having to figure out the compiler options needed. The other point is to allow test tool to honor any language extensions which can be unique towards the compiler being used. Especially with cross-compilers, it is quite common for your compiler to supply extensions which are not area of the C/C++ language standards. Some tools use the approach of #defining these extension to null strings. This very crude approach is very bad since it changes the object code the compiler produces. For example, consider the subsequent global extern having a GCC attribute: extern int MyGlobal __attribute__ ((aligned (16))); If your candidate tool doesn't maintain the attribute when defining the worldwide object MyGlobal, then code will behave differently during testing laptop or computer will when deployed as the memory will not likely be aligned exactly the same. Key Points - Does the tool automatically compile and link quality harness? - Does the tool honor and implement compiler-specific language extension? - What form of interface is there on the compiler (IDE, CLI, etc.)? - Does the tool have an interface to import project settings out of your development environment, or must they be manually imported? 
- If the tool does import project settings, is that this import feature general purpose or restricted to specific compiler, or compiler families? - Is the tool integrated together with your debugger to allow one to debug tests? Support for Testing while on an Embedded Target In this section we will use the term "Tool Chain" to refer to the total cross development environment including the cross-compiler, debug interface (emulator), target board, and Real-Time Operating System (RTOS). It is imperative that you consider in the event the candidate tools have robust target integrations to your tool chain, and to understand what in the tool has to change in the event you
  • 8. migrate to an alternative tool chain. Additionally, it is important to be aware of the automation level and robustness of the target integration. As mentioned earlier: If a vendor says: "we support all compilers and many types of targets out from the box." They mean: "You do all of the work to generate our tool work with your environment." Ideally, the tool that you select enables "push button" test execution where all from the complexity of downloading to the target and capturing test results back to the host is abstracted into the "Test Execution" feature to ensure no special user actions are expected. An additional complication with embedded target tests are hardware availability. Often, the hardware is being developed in parallel with the program, or there is certainly limited hardware availability. A key feature may be the ability to start testing in the native environment and later on transition to the actual hardware. Ideally, the tool artifacts are hardware independent. Key Points - Is my tool chain supported? If not, could it be supported? What does "supported" mean? - Can I build tests on a host system and later use them for target testing? - How does the exam harness get downloaded on the target? - How are the test results captured back on the host? - What targets, cross compilers, and RTOS are supported off-the-shelf? - Who builds the support to get a new tool chain? - Is any part of the tool chain integration user configurable? Test Case Editor Obviously, the exam case editor is to try and will spend much of your interactive time utilizing a test tool. If there exists true automation with the previous items mentioned in this paper, then the amount of time owing to setting up quality environment, and also the target connection must be minimal. Remember that which you said at the start, you require to use the engineer's time for it to design better plus more complete tests. 
The key element to evaluate is how hard it is to set up test inputs and expected values for non-trivial constructs. All tools in this market provide some easy way to set up scalar values. For example, does your candidate tool provide a simple and intuitive way to construct a class? How about an abstract way to build an STL container, such as a vector or a map? These are the things to evaluate in the test case editor. As with the rest of this paper, there is "support" and then there is "automated support". Keep this distinction in mind when evaluating the constructs that matter to you.
Key Points
- Are allowed ranges for scalar values shown?
- Are array sizes shown?
- Is it easy to set Min and Max values with tags rather than hard-coded values? This is important to maintain the integrity of the test if a type changes.
- Are special floating point numbers supported (e.g. NaN, +/- Infinity)?
- Can you do combinatorial tests (vary 5 parameters over a range and have the tool generate all combinations of those values)?
- Is the editor "base aware" so that you can easily enter values in alternate bases such as hex, octal, and binary?
- For expected results, can you easily enter absolute tolerances (e.g. +/- 0.05) and relative tolerances (e.g. +/- 1%) for floating point values?
- Can test data be easily imported from other sources such as Excel?

Code Coverage
Most "semi-automated" tools and all "automated" tools have some built-in code coverage facility that lets you see metrics showing the portion of the application that is exercised by your test cases. Some tools present this information in table form. Some show flow graphs, and some show annotated source listings. While tables are good as a summary, if you are trying to achieve 100% code coverage an annotated source listing is best. Such a listing shows the original source code with colorations for covered, partially covered, and uncovered constructs. This allows you to easily see the additional test cases that are needed to reach 100% coverage.

It is important to understand the impact of the added instrumentation on your application. There are two considerations: one is the increase in size of the object code, and the other is the run-time overhead. It is important to understand whether your application is memory limited or real-time limited (or both). This will help you focus on which item is most important for your application.

Key Points
- What is the code size increase for each type of instrumentation?
- What is the run-time increase for each type of instrumentation?
- Can instrumentation be integrated into your "make" or "build" system?
- How are the coverage results presented to the user? Are there annotated listings with a graphical coverage browser, or just tables of metrics?
- How is the coverage information retrieved from the target? Is the process flexible? Can data be buffered in RAM?
- Are statement, branch (or decision), and MC/DC coverage supported?
- Can multiple coverage types be captured in a single execution?
- Can coverage data be shared across multiple test environments (e.g. can some coverage be captured during system testing and be combined with the coverage from unit and integration testing)?
- Can you step through the test execution using the coverage data to understand the flow of control through your application without using a debugger?
- Can you get aggregate coverage for all test runs in a single report?
- Can the tool be qualified for DO-178B and for Medical Device "intended use"?

Regression Testing
There should be two basic goals in adopting a test tool. The primary goal is to save time testing. If you've read this far, we assume you agree with that! The secondary goal is to allow the created tests to be leveraged over the life cycle of the application. This means that the time and money invested in building tests should result in tests that are re-usable as the application changes over time and that are easy to configuration manage. The major thing to evaluate in your candidate tool is what specific things need to be "saved" in order to run the same tests in the future, and how the re-running of tests is controlled.

Key Points
> What file or files need to be configuration managed to regression test?
> Does the tool have a complete and documented Command Line Interface (CLI)?
> Are these files plain text or binary? This affects your ability to use a diff utility to evaluate changes over time.
> Do the harness files generated by the tool need to be configuration managed?
> Is there integration with configuration management tools?
> Create a test for a unit, then change the name of a parameter and re-create your test environment. How long does this take? Is it complicated?
> Does the tool support database technology and statistical graphs to allow trend analysis of test execution and code coverage over time?
> Can you test multiple baselines of code with the same set of test cases automatically?
> Is distributed testing supported, to allow portions of the tests to be run on different physical machines to speed up testing?
Reporting
Most tools provide similar reporting. Minimally, they should create an easy-to-understand report showing the inputs, the expected outputs, the actual outputs, and a comparison of the expected and actual values.

Key Points
> What output formats are supported? HTML? Text? CSV? XML?
> Is it simple to get both a high-level (project-wide) report and a detailed report for a single function?
> Is the report content user configurable?
> Is the report format user configurable?

Integration with Other Tools
Regardless of the quality or usefulness of any particular tool, all tools need to operate in a multi-vendor environment. A lot of time and money has been spent by big tool companies buying up smaller tool companies with the idea of offering "the tool" that will do everything for everybody. The interesting thing is that, most often with these mega tool suites, the whole is much less than the sum of the parts. It seems that companies often take four or five pretty cool small tools and integrate them into one bulky and unusable tool.

Key Points
> Which tools does your candidate tool integrate with out-of-the-box, and can the end-user add integrations?

Additional Desirable Features for a Testing Tool
The previous sections all described functionality that should be present in any tool that claims to be an automated test tool. The next few sections list some desirable features, along with a rationale for the importance of each. These features may have varying degrees of applicability to your particular project.

True Integration Testing / Multiple Units Under Test
Integration testing is an extension of unit testing. It is used to check the interfaces between units and requires you to combine the units that make up some functional process. Many tools claim to support integration testing by linking the object code for real units into the test harness.
This approach builds multiple files into the test harness executable but provides no ability to stimulate the functions within those additional units. Ideally, you would be able to stimulate any function within any unit, in any order, within a single test case. Testing the interfaces between units will generally uncover a lot of hidden assumptions and bugs in the application. In fact, integration testing can be a good first step for projects that have no history of unit testing.

Key Points
> Can I include multiple units in the test environment?
> Can I create complex test scenarios for these classes where I stimulate a sequence of functions across multiple units within one test case?
> Can I capture code coverage metrics for multiple units?

Dynamic Stubbing
Dynamic stubbing means that you can turn individual function stubs on and off dynamically. This allows you to create a test for a single function with all other functions stubbed (even if they exist in the same unit as the function under test). For very complicated code this is a great feature, and it makes testing much easier to implement.

Key Points
> Can stubs be chosen at the function level, or only at the unit level?
> Can function stubs be turned on and off per test case?
> Are the function stubs automatically generated (see items in the previous section)?

Library and Application Level Thread Testing (System Testing)
One of the challenges of system testing is that the test stimulus provided to the fully integrated application may require a user pushing buttons, flipping switches, or typing at a console. If the application is embedded, the inputs can be even more complicated to control. Suppose you could stimulate your fully integrated application at the function level, similar to how integration testing is done. This would allow you to build complex test scenarios that depend only on the API of the application. Some of the more modern tools allow you to test this way. An additional benefit of this mode of testing is that you do not need the source code to test the application; you simply need the definition of the API (generally the header files). This methodology gives testers an automated and scriptable way to perform system testing.

Agile Testing and Test Driven Development (TDD)
Test Driven Development promises to bring testing into the development process earlier than ever before.
Instead of writing application code first and unit tests as an afterthought, you build your tests before your application code. This is a popular new approach to development that enforces a test-first, test-often discipline. Your automated tool should support this method of testing if you plan to use an Agile development methodology.

Bi-directional Integration with Requirements Tools
If you care about associating requirements with test cases, it is desirable for a test tool to integrate with a requirements management tool. If you are interested in this feature, it is important that the interface be bi-directional, so that when requirements are tagged to test cases, the test case information, such as test name and pass/fail status, can be pushed back to your requirements database. This will allow you to get a sense of the completeness of your requirements testing.

Tool Qualification
If you are operating in a regulated environment such as commercial aviation or Class III medical devices, you will be obligated to "qualify" the development tools used to build and test your application. Qualification involves documenting what the tool is supposed to do, along with tests that prove the tool operates in accordance with those requirements. Ideally a vendor will have these materials off-the-shelf, as well as a history of customers who have used the qualification data in your industry.

Key Points
> Does the tool vendor offer qualification materials produced for your exact target environment and tool chain?
> What projects have successfully used these materials?
> How are the materials licensed?
> How are the materials customized and approved for a particular project?
> If this is an FAA project, have the qualification materials been successfully used to certify to DO-178B Level A?
> If it is an FDA project, have the tools been qualified for "intended use"?

Conclusion
Hopefully this paper provides useful information that helps you navigate the offerings of test tool vendors. The relative importance of each of the items raised will differ from project to project. Our final suggestions are:
> Evaluate the candidate tools on code that is representative of the complexity of the code in your application
> Evaluate the candidate tools with the same tool chain that will be used on your project
> Talk to long-term customers of the vendor and ask them some of the questions raised in this paper
> Ask about the tool's technical support team. Try them out by submitting some questions directly to their support (rather than to their salesperson)

Finally, remember that almost every tool can somehow support the items mentioned in the "Key Points" sections.
Your job is to evaluate how automated, easy to use, and complete that support is.

About Vector Software
Vector Software, Inc., is a leading independent provider of automated software testing tools for developers of safety-critical embedded applications. Vector Software's VectorCAST line of products automates and manages the complex tasks associated with unit, integration, and system level testing. VectorCAST products support the C, C++, and Ada programming languages.