Glossary Of Testing Terms And Concepts


This glossary describes common and specialized terms used in software test engineering.
Alliance supports organizations with mission-critical QA and testing services. With a focus on speed, accuracy, and reliability, our independent Verification and Validation services ensure delivery of trusted software and uninterrupted business.

Alliance has partnered with dozens of organizations across domains to accelerate their testing services and has taken hundreds of products to market successfully with measurable, predictable software quality processes.

To learn more about Alliance QA and Testing Services, please visit our website:



  1. Glossary of Testing Terms and Concepts AGS QA and Testing CoE December 18, 2009
  2. General terms
  3. QA & Software Testing Quality assurance, or QA for short, refers to the systematic monitoring and evaluation of various aspects of a project, program, or service to ensure that standards of quality are being met. Software testing, or Quality Control (QC) for short, is the Verification and Validation (V&V) activity aimed at evaluating an attribute or capability of a program or system and determining that it meets the desired results.
  4. Verification Verification (the first V) is the process of evaluating a system or component to determine whether the output of a given development phase satisfies the conditions expected at the start of that phase.
  5. Validation Validation is the process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements.
  6. Test Automation Test automation is the use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions. Commonly, test automation involves automating a manual process already in place that uses a formalized testing process.
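The core of test automation described above (executing tests and comparing actual to predicted outcomes) can be sketched in a few lines. This is a minimal illustration, not any particular tool; `add` is a hypothetical stand-in for the application code under test.

```python
# Minimal sketch of test automation: execute the function under test,
# compare actual outcomes to predicted outcomes, and collect a report.

def add(a, b):
    """Hypothetical application code under test."""
    return a + b

def run_tests(cases):
    """Run (inputs, expected) cases and return a pass/fail report."""
    results = []
    for (a, b), expected in cases:
        actual = add(a, b)            # controlled test execution
        passed = actual == expected   # actual vs. predicted comparison
        results.append((f"add({a}, {b})", passed))
    return results

report = run_tests([((1, 2), 3), ((0, 0), 0), ((-1, 1), 0)])
```

Real frameworks add the setup of preconditions and richer reporting on top of this same execute-and-compare loop.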
  7. Types of Test Automation Frameworks The available test automation frameworks are: Test Script Modularity; Test Library Architecture; Data-Driven Testing; Keyword-Driven (or Table-Driven) Testing; Hybrid Test Automation.
  8. Test Script Modularity The test script modularity framework is the most basic of the frameworks. It's a programming strategy to build an abstraction layer in front of a component to hide the component from the rest of the application. This insulates the application from modifications in the component and provides modularity in the application design. When working with test scripts (in any language or proprietary environment) this can be achieved by creating small, independent scripts that represent modules, sections, and functions of the application-under-test. These small scripts are then combined in a hierarchical fashion to construct larger tests. The use of this framework will yield a higher degree of modularization and add to the overall maintainability of the test scripts.
  9. Test Library Architecture The test library architecture framework is very similar to the test script modularity framework and offers the same advantages, but it divides the application-under-test into procedures and functions (or objects and methods depending on the implementation language) instead of scripts. This framework requires the creation of library files (SQABasic libraries, APIs, DLLs, and such) that represent modules, sections, and functions of the application-under-test. These library files are then called directly from the test case script. Much like script modularization this framework also yields a high degree of modularization and adds to the overall maintainability of the tests.
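The difference from plain script modularity is that the reusable pieces live in a shared library called directly from the test case script. A minimal sketch, with a class standing in for a library file and purely hypothetical method names:

```python
# Sketch of the test library architecture: common procedures live in a
# shared library (a class here, standing in for an SQABasic library,
# API, or DLL), and the test case script calls the library directly.

class AppTestLibrary:
    """Library of reusable procedures for the application-under-test."""

    def __init__(self):
        self.session = {}

    def login(self, user, password):
        # In a real library this would drive the actual application.
        self.session["user"] = user
        return True

    def logout(self):
        self.session.pop("user", None)
        return True

# Test case script: composed of library calls, not inlined steps.
lib = AppTestLibrary()
test_passed = lib.login("alice", "secret") and lib.logout()
```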
  10. Data-Driven Testing A data-driven framework is where test input and output values are read from data files (ODBC sources, CSV files, Excel files, DAO objects, ADO objects, and such) and are loaded into variables in captured or manually coded scripts. In this framework, variables are used for both input values and output verification values. Navigation through the program, reading of the data files, and logging of test status and information are all coded in the test script. This is similar to table-driven testing in that the test case is contained in the data file and not in the script; the script is just a "driver," or delivery mechanism, for the data. In data-driven testing, only test data is contained in the data files.
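The driver-plus-data-file split can be sketched with the standard `csv` module. The CSV content is embedded in-memory here for self-containment; in practice it would be an external file, and `add` is a hypothetical function under test.

```python
# Sketch of data-driven testing: inputs and expected outputs live in a
# CSV data file; the script is only a "driver" that loads each row into
# variables and performs the verification.

import csv
import io

CSV_DATA = """a,b,expected
1,2,3
10,-4,6
0,0,0
"""

def add(a, b):
    """Hypothetical function under test."""
    return a + b

failures = []
for row in csv.DictReader(io.StringIO(CSV_DATA)):
    a, b = int(row["a"]), int(row["b"])    # input value variables
    expected = int(row["expected"])        # output verification variable
    if add(a, b) != expected:
        failures.append(row)
```

Adding a new test case means adding a CSV row; the driver script itself never changes.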
  11. Keyword-Driven Testing This requires the development of data tables and keywords, independent of the test automation tool used to execute them and of the test script code that "drives" the application-under-test and the data. Keyword-driven tests look very similar to manual test cases. In a keyword-driven test, the functionality of the application-under-test is documented in a table, as are the step-by-step instructions for each test. In this method, the entire process is data-driven, including functionality.
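A keyword-driven test can be sketched as a table of keyword rows dispatched by a small engine. The application is simulated with a dictionary and the keyword names are hypothetical; note how the table itself reads like a manual test case.

```python
# Sketch of keyword-driven testing: the test is a table of
# (keyword, argument) rows, and a small engine dispatches each keyword
# to the function implementing it. Application state is simulated.

state = {"page": None, "input": ""}

def open_page(name):
    state["page"] = name

def enter_text(text):
    state["input"] = text

def verify_page(expected):
    assert state["page"] == expected, f"expected page {expected!r}"

KEYWORDS = {
    "OpenPage": open_page,
    "EnterText": enter_text,
    "VerifyPage": verify_page,
}

# The test table: readable like manual test steps, independent of the
# engine that executes it.
test_table = [
    ("OpenPage", "login"),
    ("EnterText", "alice"),
    ("VerifyPage", "login"),
]

for keyword, arg in test_table:
    KEYWORDS[keyword](arg)   # the engine "drives" the application
```

Because both the data and the functionality to exercise are in the table, non-programmers can write new tests without touching the engine.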
  12. Hybrid Test Automation Framework The most commonly implemented framework is a combination of all of the above techniques, pulling from their strengths and trying to mitigate their weaknesses. This hybrid test automation framework is what most frameworks evolve into over time and multiple projects. The most successful automation frameworks generally accommodate both keyword-driven testing and data-driven scripts. This allows data-driven scripts to take advantage of the powerful libraries and utilities that usually accompany a keyword-driven architecture. The framework utilities can make the data-driven scripts more compact and less prone to failure than they otherwise would have been.
  13. Errors, Bugs, Defects… Mistake – a human action that produces an incorrect result. Bug, Fault [or Defect] – an incorrect step, process, or data definition in a program. Failure – the inability of a system or component to perform its required function within the specified performance requirement. Error – the difference between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition.
  14. The progression of a software failure One purpose of testing is to expose as many failures as possible before delivering the code to customers.
  15. Test Visibility Black box testing (also called functional testing or behavioral testing) is testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions. White box testing (also called structural testing and glass box testing) is testing that takes into account the internal mechanism of a system or component.
  16. Comparing Black and White
  17. Specification Specification – a document that specifies, in a complete, precise, verifiable manner, the requirements, design, behavior, or other characteristic of a system or component, and often the procedures for determining whether these provisions have been satisfied. Some examples: Functional Requirements Specification; Non-Functional Requirements Specification; Design Specification.
  18. Testing Scope Functional Requirements (FR), also termed Business Requirements, of the software or program under test. Non-Functional Requirements (NFR) of the software or program under test: these are implicit requirements the software or program is expected to satisfy for the end user to be able to use it successfully. Security, performance, compatibility, internationalization, and usability requirements are examples of NFRs. White-box or black-box testing is performed to validate that the software or program meets the FR and NFR.
  19. Types of Testing
  20. Unit Testing Opacity: White box testing Specification: Low-level design and/or code structure Unit testing is the testing of individual hardware or software units or groups of related units. Using white box testing techniques, testers (usually the developers creating the code implementation) verify that the code does what it is intended to do at a very low structural level.
  21. Integration Testing Opacity: Black- and white-box testing Specification: Low- and high-level design Integration testing is testing in which software components, hardware components, or both are combined and tested to evaluate the interaction between them. Using both black and white box testing techniques, the tester (still usually the software developer) verifies that units work together when they are integrated into a larger code base. Just because the components work individually doesn't mean that they all work together when assembled or integrated.
  22. Functional Testing Opacity: Black-box testing Specification: High-level design, requirements specification Using black box testing techniques, testers examine the high-level design and the customer requirements specification to plan the test cases to ensure the code does what it is intended to do. Functional testing involves ensuring that the functionality specified in the requirements specification works.
  23. System Testing Opacity: Black-box testing Specification: High-level design, requirements specification System testing is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. Because system testing is done with a full system implementation and environment, several classes of testing can be done that examine non-functional properties of the system. It is best when integration, functional, and system testing are done from an unbiased, independent perspective (e.g., not by the programmer).
  24. Acceptance Tests Opacity: Black-box testing Specification: Requirements specification After functional and system testing, the product is delivered to a customer, and the customer runs black box acceptance tests based on their expectations of the functionality. Acceptance testing is formal testing conducted to determine whether or not a system satisfies its acceptance criteria (the criteria the system must satisfy to be accepted by a customer) and to enable the customer to determine whether or not to accept the system.
  25. Beta Tests Opacity: Black-box testing Specification: None When an advanced partial or full version of a software package is available, the development organization can offer it free to one or more (sometimes thousands of) potential users, or beta testers.
  26. Regression Tests Regression testing is selective retesting of a system or component to verify that modifications have not caused unintended effects and that the system or component still complies with its specified requirements.
  27. Test Plan A test plan is a document describing the scope, approach, resources, and schedule of intended test activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency plans. An important component of the test plan is the individual test cases.
  28. Test Scenarios/Test Case The terms test scenario and test case are often used synonymously. A test case is a set of test steps, each of which defines inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement. Test scenarios ensure that all business process flows are tested from end to end.
  29. Test Suite/Scripts A test suite/test script is a combination of test scenarios, test steps, and test data. Initially the term was derived from the product of work created by automated regression test tools. Test suites/test scripts can be manual, automated, or a combination of both.
  30. Application Terms Application: An application is software with features. Feature / Functional point: A functional point is a feature of the application. Some examples of features might be search, login, signup, and edit preferences. Session: A session is the means of grouping together functional points of a single application. The session keeps state and tracks variables of all the functional points grouped in a single session.
  31. Action Point An action point can be thought of as the act of doing. Take search, for example. Usually search has only two UI elements: a text box where the search terms are entered and a search button that submits the search terms. The action point doesn't necessarily care about the search results. It only cares that the search page was loaded correctly, the search term was inserted, and the search button was clicked. In other words, an action point doesn't validate what happened after the action occurred; it only performs the action.
  32. Validation Point A validation point verifies the outcome of the action point. Usually an action has several possible outcomes. For example, login might behave differently depending on whether the username and password are correct, incorrect, too long, too short, or just non-existent. The action point is the act of logging in, and the validation point verifies the outcome when using valid, invalid, too long, too short, or non-existent usernames and passwords.
  33. Navigation Point Navigation points traverse to the action point or the validation point so it can be executed. For example, there are websites that have a navigation bar on top no matter which page is loaded. In this case the navigation point simply clicks the link in the navigation bar from any page in order to load the page on which to perform the action or validation.
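The three point types can be sketched together for the search example used above. No real browser is involved; the `site` dictionary and the function names are hypothetical stand-ins for page state and point implementations.

```python
# Sketch tying navigation, action, and validation points together for a
# simulated search page (the `site` dict stands in for a real UI).

site = {"current_page": "home", "last_search": None}

def navigate_to_search():
    """Navigation point: traverse to the page to be exercised."""
    site["current_page"] = "search"

def perform_search(term):
    """Action point: perform the act, without validating its outcome."""
    assert site["current_page"] == "search"  # page loaded correctly
    site["last_search"] = term               # term entered and submitted

def validate_search(term):
    """Validation point: verify the outcome of the action."""
    return site["last_search"] == term

navigate_to_search()
perform_search("glossary")
result_ok = validate_search("glossary")
```

Keeping the three concerns separate means the same action point can be reused with many validation points (valid, invalid, empty input, and so on).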
  34. Data Flow Testing In data flow-based testing, the control flow graph is annotated with information about how the program variables are defined and used. Different criteria exercise, with varying degrees of precision, how a value assigned to a variable is used along different control flow paths.
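A small illustration of the definitions and uses involved: below, the variable `discount` is defined on two different control flow paths and used afterward, giving two def-use pairs that a data-flow criterion such as all-uses would require tests to exercise. The function itself is hypothetical.

```python
# Illustrative sketch for data-flow testing: `discount` has a
# definition on each branch and one use, so there are two def-use
# pairs, each reached along a different control flow path.

def price_after_discount(total, is_member):
    if is_member:
        discount = 0.10      # def 1 of `discount`
    else:
        discount = 0.0       # def 2 of `discount`
    return total * (1 - discount)   # use of `discount`

# Covering both def-use pairs needs at least these two test inputs:
member_price = price_after_discount(100.0, True)    # def 1 -> use
guest_price = price_after_discount(100.0, False)    # def 2 -> use
```

Branch coverage alone would also need both inputs here, but data-flow criteria generalize this idea: they track each value from where it is assigned to every place it is used, across paths.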
  35. Additional Types of Tests Performance testing – testing conducted to evaluate the compliance of a system or component with specified performance requirements. Usability testing – testing conducted to evaluate the extent to which a user can learn to operate, prepare inputs for, and interpret outputs of a system or component. Stress testing – testing conducted to evaluate a system or component at or beyond the limits of its specification or requirement. Smoke test – a group of test cases that establish that the system is stable and all major functionality is present and works under "normal" conditions. Robustness testing – testing whereby test cases are chosen outside the domain to test robustness to unexpected, erroneous input.
  36. Thank You Delivering…