There are many approaches to software testing, but effective testing of complex products is essentially a process of investigation, not merely a matter of creating and following routine procedures.
One definition of testing is:
"the process of questioning a product in order to evaluate it", where the "questions" are things the tester tries to do with the product, and the product answers with its behavior in reaction to the probing of the tester.
Although most of the intellectual processes of testing are nearly identical to those of review or inspection, the word testing also connotes the dynamic analysis of the product: putting the product through its paces.
The quality of the application can, and normally does, vary widely from system to system, but some of the common quality attributes include reliability, efficiency, portability, maintainability and usability.
In general, software engineers distinguish software faults from software failures.
When software does not operate as it is intended to do, a software failure is said to occur.
Software failures are caused by one or more sections of the software program being incorrect. Each of these incorrect sections is called a software fault. The fault could be as simple as a wrong value, or it could be the complete omission of a decision in the program.
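As a toy illustration (the function and its requirement are invented for this sketch), a single wrong value is the fault, and the incorrect behavior it produces is the failure:

```python
# Hypothetical requirement: a person is an adult at age 18 or older.
def is_adult(age):
    return age >= 21   # fault: wrong value (21 instead of 18)

# The fault stays hidden for most inputs...
is_adult(25)   # True, as required
# ...but produces an observable failure for ages 18-20:
is_adult(19)   # False, though the requirement says this should be True
```

The fault is a single incorrect constant in the source; the failure is the wrong behavior observed when that faulty code is exercised.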
The number of potential test cases is huge. For example, in the case of a simple program that multiplies two integer numbers:
if each integer is a 32-bit number (a common size for the computer representation), then there are 2^32 possible values for each number.
This means the total number of possible input combinations is 2^64, which is more than 10^19.
If a test case could be run each microsecond (10^-6 seconds), it would take hundreds of thousands of years to try all of the possible test cases. Trying all possible test cases is called exhaustive testing and is usually not a reasonable approach because of the size of the task.
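The arithmetic behind this example can be checked in a few lines:

```python
# Two 32-bit inputs, one test case per microsecond.
cases = 2 ** 32 * 2 ** 32            # total input combinations: 2^64
seconds = cases * 1e-6               # at one case per microsecond
years = seconds / (60 * 60 * 24 * 365)
# roughly 5.8 * 10^5 years -- hundreds of thousands of years, as stated
```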
Regardless of the methods used or level of formality involved, the desired result of testing is a level of confidence in the software, so that the developers can judge that it has an acceptable defect rate.
What constitutes an acceptable defect rate depends on the nature of the software.
An arcade video game designed to simulate flying an airplane would presumably have a much higher tolerance for defects than software used to control an actual airliner
In black box testing, the test engineer only accesses the software through the same interfaces that the customer or user would, or possibly through remotely controllable automation interfaces that connect another computer or another process into the target of the test.
For example, a test harness might push virtual keystrokes and mouse or other pointer operations into a program through any inter-process communications mechanism, with the assurance that these events are routed through the same code paths as real keystrokes and mouse clicks.
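As a minimal sketch of the idea (the child program here is a trivial stand-in, not a real product), a black-box test drives the target only through the interface a user would see, in this case the stdin/stdout of a separate process:

```python
import subprocess
import sys

# Stand-in "product": a separate process that reads a number and
# prints its double. The test observes only inputs and outputs.
child = subprocess.run(
    [sys.executable, "-c", "print(int(input()) * 2)"],
    input="21\n",
    capture_output=True,
    text=True,
)
# The tester checks behavior at the external interface only;
# no knowledge of the program's internals is used.
observed = child.stdout.strip()
```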
In recent years the term grey box testing has come into common usage.
The typical grey box tester is permitted to set up or manipulate the testing environment, like seeding a database, and can view the state of the product after their actions, like performing a SQL query on the database to be certain of the values of columns.
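A minimal grey-box sketch, using an in-memory SQLite database and a hypothetical `register_user` function standing in for the product code under test:

```python
import sqlite3

# Stand-in for the product code being tested.
def register_user(conn, name):
    conn.execute("INSERT INTO users (name, active) VALUES (?, 1)", (name,))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, active INTEGER)")

# Grey-box step 1: seed the database to set up the test environment.
conn.execute("INSERT INTO users VALUES ('seeded', 0)")

# Exercise the product.
register_user(conn, "alice")

# Grey-box step 2: query the state afterwards to be certain of
# the values of the columns.
rows = conn.execute(
    "SELECT name, active FROM users ORDER BY name"
).fetchall()
```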
System testing is the process of testing an integrated system to verify that it meets specified requirements: testing to determine that the results generated by the enterprise's information systems and their components are accurate and that the systems perform to specification.
Many testing methods regard all-pairs testing of a system or subsystem as a reasonable cost-benefit compromise between often computationally infeasible higher-order combinatorial testing methods and less exhaustive methods which fail to exercise all possible pairs of parameters
Because no testing technique can find all bugs, all-pairs testing is typically used together with other quality assurance techniques such as unit testing, fuzz testing, and code review.
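A naive greedy sketch of pairwise selection (illustrative only, not a production all-pairs algorithm) shows the idea: walk the full cartesian product and keep only those cases that cover a not-yet-covered pair of parameter values:

```python
from itertools import combinations, product

def all_pairs(params):
    """Greedy sketch: pick test cases until every pair of values
    across every two parameters has appeared in at least one case."""
    names = list(params)
    # Every (param, value) pair combination that must be exercised.
    uncovered = {
        ((a, va), (b, vb))
        for a, b in combinations(names, 2)
        for va in params[a]
        for vb in params[b]
    }
    suite = []
    for combo in product(*(params[n] for n in names)):
        case = dict(zip(names, combo))
        new = {
            ((a, case[a]), (b, case[b]))
            for a, b in combinations(names, 2)
        } & uncovered
        if new:                 # keep only cases that add coverage
            suite.append(case)
            uncovered -= new
        if not uncovered:
            break
    return suite

params = {
    "os": ["linux", "windows"],
    "browser": ["ff", "chrome"],
    "db": ["pg", "mysql"],
}
suite = all_pairs(params)   # covers all 12 value pairs in < 8 cases
```

Dedicated tools produce much smaller suites than this greedy walk, but the cost-benefit point stands: every pair is exercised without running the full cartesian product.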
Fuzz testing is often used in large software development projects that perform black box testing.
These usually have a budget to develop test tools and fuzz testing is one of the techniques which offers a high benefit to cost ratio.
Fuzz testing is also used as a gross measurement of a large software system's quality.
The advantage here is that the cost of generating the tests is relatively low. For example, third party testers have used fuzz testing to evaluate the relative merits of different operating systems and application programs.
Fuzz testing is thought to enhance software security and software safety because it often finds odd oversights and defects which human testers would fail to find and even careful human test designers would fail to create tests for.
However, fuzz testing is not a substitute for exhaustive testing or formal methods:
It can only provide a random sample of the system's behavior, and in many cases passing a fuzz test may only demonstrate that a piece of software handles exceptions without crashing, rather than behaving correctly.
Thus, fuzz testing can only be regarded as a proxy for program correctness, rather than a direct measure, with fuzz test failures actually being more useful as a bug-finding tool than fuzz test passes as an assurance of quality.
As a practical matter, developers need to reproduce errors in order to fix them. For this reason, almost all fuzz testing makes a record of the data it manufactures, usually before applying it to the software, so that if the computer fails dramatically, the test data is preserved.
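A minimal fuzzing sketch along these lines, with a hypothetical `target` callable; note that the generated data is recorded before it is applied, so a crashing input is preserved for reproduction:

```python
import os
import random
import tempfile

LOG_PATH = os.path.join(tempfile.gettempdir(), "last_fuzz_input.bin")

def fuzz_once(target, max_len=100, seed=None):
    """Generate one random byte string and feed it to `target`
    (any callable accepting bytes), recording the data first."""
    rng = random.Random(seed)
    data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, max_len)))
    with open(LOG_PATH, "wb") as f:   # record *before* use: survives a crash
        f.write(data)
    try:
        target(data)
        return None                    # no failure observed on this input
    except Exception as exc:
        return (data, exc)             # reproducible failing input
```

A fuzz run is then just a loop of `fuzz_once` calls with different seeds; any returned `(data, exc)` pair is a bug report the developer can replay.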
Modern software has several different types of inputs:
Event driven inputs are usually from a graphical user interface, or possibly from a mechanism in an embedded system.
Character driven inputs are from files, or data streams.
Database inputs are from tabular data, such as relational databases.
A heuristic evaluation is a usability testing method for computer software that helps to identify usability problems in the user interface (UI) design.
It specifically involves evaluators examining the interface and judging its compliance with recognized usability principles (the "heuristics").
These evaluation methods are now widely taught and practiced in the New Media sector, where UIs are often designed in a short space of time on a budget that may restrict the amount of money available to provide for other types of interface testing.
Quite often, usability problems that are discovered are categorized according to their estimated impact on user performance or acceptance
Often the heuristic evaluation is conducted in the context of use cases (typical user tasks), to provide feedback to the developers on the extent to which the interface is likely to be compatible with the intended users’ needs and preferences.
Collections of test cases are sometimes incorrectly termed a test plan. They may also be called a test script, or even a test scenario.
An executable test suite is a test suite that is ready to be executed.
This usually means that a test harness is integrated with the suite, so that together they can operate at a sufficiently detailed level to correctly communicate with the system under test (SUT).
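A minimal sketch using Python's `unittest` as the harness, with a trivial stand-in for the system under test:

```python
import io
import unittest

# Stand-in SUT: in a real project this would be the product code.
def multiply(a, b):
    return a * b

# The suite: test cases detailed enough to drive and check the SUT.
class MultiplyTests(unittest.TestCase):
    def test_positive(self):
        self.assertEqual(multiply(3, 4), 12)

    def test_zero(self):
        self.assertEqual(multiply(5, 0), 0)

# The harness loads and runs the suite, making it executable.
suite = unittest.TestLoader().loadTestsFromTestCase(MultiplyTests)
runner = unittest.TextTestRunner(stream=io.StringIO(), verbosity=0)
result = runner.run(suite)
```

Here `unittest` supplies both halves: the loader/runner is the harness, and the `TestCase` class is the suite it executes against the SUT.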
In computer science a monkey test is a unit test that runs with no specific test in mind.
The monkey in this case is the producer of any input data (whether that be file data, or input device data).
Examples of monkey test unit tests can vary from simple random string entry into text boxes (to ensure handling of all possible user input), to garbage files (for checking against bad loading routines that have blind faith in their data)
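A sketch of such a monkey test, feeding random printable strings into a toy input routine (the routine and its validity rule are invented for illustration):

```python
import random
import string

def parse_age(text):
    """Toy input handler under monkey test: must never crash,
    returning None for anything that is not a plausible age."""
    try:
        age = int(text)
    except ValueError:
        return None
    return age if 0 <= age <= 150 else None

# The "monkey" produces arbitrary input data; the only check is
# that no input, however garbled, crashes the routine.
rng = random.Random(0)
for _ in range(1000):
    garbage = "".join(
        rng.choice(string.printable) for _ in range(rng.randrange(20))
    )
    parse_age(garbage)   # raising any exception here is a test failure
```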
A scenario test is a test based on a hypothetical story used to help a person think through a complex problem or system. It can be as simple as a diagram for a testing environment or a description written in prose. The ideal scenario test has five key characteristics: it is (a) a story that is (b) motivating, (c) credible, (d) complex, and (e) easy to evaluate.
There is considerable controversy among testing writers and consultants about what constitutes responsible software testing.
Members of the “context-driven” school of testing believe that there are no "best practices" for testing, but rather that testing is a set of skills that allow the tester to select or invent testing practices to suit each unique situation.
This belief directly contradicts standards such as the IEEE 829 test documentation standard, and organisations such as the US FDA that promote them.
The agile testing movement (which includes but is not limited to forms of testing practiced on agile development projects) has popularity mainly in commercial circles, whereas the CMM was embraced by government and military software providers.
However, saying that "maturity models" like the CMM gained ground against, or in opposition to, agile testing may not be right.
The agile movement is a way of working, while the CMM is a process improvement model.
There are two main disadvantages associated with a primarily exploratory testing approach.
The first is that it offers no opportunity to prevent defects: designing tests in advance serves as a form of structured static testing that often reveals problems in system requirements and design.
The second is that, even with test charters, demonstrating test coverage and achieving repeatability of tests using a purely exploratory testing approach is difficult.
There are four major categories of software users (entities within an application’s environment that are capable of sending the application input or consuming its output).
Note that of the four major categories of users, only one is visible to the human tester’s eye: the user interface. The interfaces to the kernel, the file system and other software components happen without scrutiny.