Glossary of Testing Terms and Concepts



  1. Glossary of Testing Terms and Concepts, AGS QA and Testing CoE, December 18, 2009
  2. General terms
  3. QA & Software Testing: Quality assurance, or QA for short, refers to the systematic monitoring and evaluation of various aspects of a project, program, or service to ensure that standards of quality are being met. Software testing, or Quality Control (QC for short), is the Validation and Verification (V&V) activity aimed at evaluating an attribute or capability of a program or system and determining whether it meets the desired results.
  4. Verification: Verification (the first V) is the process of evaluating a system or component to determine whether the output of a given development phase satisfies the conditions expected at the start of that phase.
  5. Validation: Validation is the process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements.
  6. Test Automation: Test automation is the use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions. Commonly, test automation involves automating a manual process already in place that uses a formalized testing process.
  7. Types of Test Automation Frameworks: The different test automation frameworks available are as follows: Test Script Modularity, Test Library Architecture, Data-Driven Testing, Keyword-Driven (or Table-Driven) Testing, and Hybrid Test Automation.
  8. Test Script Modularity: The test script modularity framework is the most basic of the frameworks. It is a programming strategy that builds an abstraction layer in front of a component to hide the component from the rest of the application. This insulates the application from modifications in the component and provides modularity in the application design. When working with test scripts (in any language or proprietary environment), this can be achieved by creating small, independent scripts that represent modules, sections, and functions of the application-under-test. These small scripts are then combined in a hierarchical fashion to construct larger tests. This framework yields a high degree of modularization and adds to the overall maintainability of the test scripts.
  9. Test Library Architecture: The test library architecture framework is very similar to the test script modularity framework and offers the same advantages, but it divides the application-under-test into procedures and functions (or objects and methods, depending on the implementation language) instead of scripts. This framework requires the creation of library files (SQABasic libraries, APIs, DLLs, and such) that represent modules, sections, and functions of the application-under-test. These library files are then called directly from the test case script. Much like script modularization, this framework also yields a high degree of modularization and adds to the overall maintainability of the tests.
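A minimal sketch of the idea, assuming a hypothetical login feature; in practice the library functions would live in their own file (SQABasic library, DLL, module) and be called from the test case scripts:

```python
# --- shared test library (would normally live in its own module) ---
# USERS, login, and logout are illustrative names, not a real API.
USERS = {"alice": "s3cret"}  # hypothetical test fixture

def login(username, password):
    """Library function wrapping the application's login operation."""
    return USERS.get(username) == password

def logout(session_active):
    """Library function wrapping logout; the session always ends."""
    return False

# --- test case scripts: call library functions directly ---
def test_valid_login():
    assert login("alice", "s3cret") is True

def test_invalid_login():
    assert login("alice", "wrong") is False

test_valid_login()
test_invalid_login()
```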
  10. Data-Driven Testing: A data-driven framework is one where test input and output values are read from data files (ODBC sources, CSV files, Excel files, DAO objects, ADO objects, and such) and are loaded into variables in captured or manually coded scripts. In this framework, variables are used for both input values and output verification values. Navigation through the program, reading of the data files, and logging of test status and information are all coded in the test script. This is similar to table-driven testing in that the test case is contained in the data file and not in the script; the script is just a "driver," or delivery mechanism, for the data. In data-driven testing, only test data is contained in the data files.
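A data-driven driver might look like the following sketch, where an inline CSV string stands in for an external data file and `add` is a hypothetical function under test:

```python
import csv
import io

# The test cases live in the data; the script is only a "driver".
# An inline CSV string stands in for an external data file.
TEST_DATA = """a,b,expected
2,3,5
10,-4,6
0,0,0
"""

def add(a, b):
    """Hypothetical function under test."""
    return a + b

failures = []
for row in csv.DictReader(io.StringIO(TEST_DATA)):
    actual = add(int(row["a"]), int(row["b"]))
    if actual != int(row["expected"]):
        failures.append(row)

assert not failures, failures
```

Adding a new test case means adding a row to the data file; the driver script never changes.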
  11. Keyword-Driven Testing: This requires the development of data tables and keywords, independent of the test automation tool used to execute them and of the test script code that "drives" the application-under-test and the data. Keyword-driven tests look very similar to manual test cases. In a keyword-driven test, the functionality of the application-under-test is documented in a table as well as in step-by-step instructions for each test. In this method, the entire process is data-driven, including functionality.
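The idea can be sketched as a small interpreter; the keywords, table rows, and login logic below are all illustrative:

```python
# Each table row is (keyword, argument, expected result); a generic
# driver maps keywords to actions. All names here are illustrative.

state = {"logged_in": False}

def do_login(arg):
    state["logged_in"] = (arg == "valid_user")
    return state["logged_in"]

def do_logout(_arg):
    state["logged_in"] = False
    return state["logged_in"]

KEYWORDS = {"login": do_login, "logout": do_logout}

# The "test case" is the table, not the code.
TEST_TABLE = [
    ("login",  "valid_user", True),
    ("logout", "",           False),
    ("login",  "bad_user",   False),
]

def run(table):
    for keyword, arg, expected in table:
        assert KEYWORDS[keyword](arg) == expected, (keyword, arg)

run(TEST_TABLE)
```

Because the table reads like a manual test case, non-programmers can write or review keyword-driven tests without touching the driver code.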
  12. Hybrid Test Automation Framework: The most commonly implemented framework is a combination of all of the above techniques, pulling from their strengths and trying to mitigate their weaknesses. This hybrid test automation framework is what most frameworks evolve into over time and over multiple projects. The most successful automation frameworks generally accommodate both keyword-driven testing and data-driven scripts. This allows data-driven scripts to take advantage of the powerful libraries and utilities that usually accompany a keyword-driven architecture. The framework utilities can make the data-driven scripts more compact and less prone to failure than they otherwise would have been.
  13. Errors, Bugs, Defects: Mistake – a human action that produces an incorrect result. Bug, Fault [or Defect] – an incorrect step, process, or data definition in a program. Failure – the inability of a system or component to perform its required function within the specified performance requirement. Error – the difference between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition.
  14. The Progression of a Software Failure: A mistake by a developer introduces a fault (defect) into the code; when the faulty code executes, it can produce a failure visible to the user. A purpose of testing is to expose as many failures as possible before delivering the code to customers.
  15. Test Visibility: Black-box testing (also called functional testing or behavioral testing) is testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions. White-box testing (also called structural testing or glass-box testing) is testing that takes into account the internal mechanism of a system or component.
  16. Comparing Black and White
  17. Specification: A specification is a document that specifies, in a complete, precise, verifiable manner, the requirements, design, behavior, or other characteristics of a system or component, and often the procedures for determining whether these provisions have been satisfied. Some examples: Functional Requirements Specification, Non-Functional Requirements Specification, Design Specification.
  18. Testing Scope: Functional Requirements (FR), also termed Business Requirements, of the software or program under test. Non-Functional Requirements (NFR) of the software or program under test; these are non-explicit requirements which the software or program is expected to satisfy for the end user to be able to use it successfully. Security, performance, compatibility, internationalization, and usability requirements are examples of NFR. White-box or black-box testing is performed to validate that the software or program meets the FR and NFR.
  19. Types of Testing
  20. Unit Testing: Opacity: White-box testing. Specification: Low-level design and/or code structure. Unit testing is the testing of individual hardware or software units or groups of related units. Using white-box testing techniques, testers (usually the developers creating the code implementation) verify that the code does what it is intended to do at a very low structural level.
  21. Integration Testing: Opacity: Black- and white-box testing. Specification: Low- and high-level design. Integration testing is testing in which software components, hardware components, or both are combined and tested to evaluate the interaction between them. Using both black- and white-box testing techniques, the tester (still usually the software developer) verifies that units work together when they are integrated into a larger code base. Just because the components work individually does not mean that they all work together when assembled or integrated.
  22. Functional Testing: Opacity: Black-box testing. Specification: High-level design, Requirements specification. Using black-box testing techniques, testers examine the high-level design and the customer requirements specification to plan the test cases and ensure the code does what it is intended to do. Functional testing involves ensuring that the functionality specified in the requirements specification works.
  23. System Testing: Opacity: Black-box testing. Specification: High-level design, Requirements specification. System testing is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. Because system testing is done with a full system implementation and environment, several classes of testing can be done that examine non-functional properties of the system. It is best when integration, functional, and system testing are done from an unbiased, independent perspective (e.g., not by the programmer).
  24. Acceptance Tests: Opacity: Black-box testing. Specification: Requirements specification. After functional and system testing, the product is delivered to a customer, and the customer runs black-box acceptance tests based on their expectations of the functionality. Acceptance testing is formal testing conducted to determine whether or not a system satisfies its acceptance criteria (the criteria the system must satisfy to be accepted by a customer) and to enable the customer to determine whether or not to accept the system.
  25. Beta Tests: Opacity: Black-box testing. Specification: None. When an advanced partial or full version of a software package is available, the development organization can offer it free to one or more (and sometimes thousands of) potential users, or beta testers.
  26. Regression Tests: Regression testing is selective retesting of a system or component to verify that modifications have not caused unintended effects and that the system or component still complies with its specified requirements.
  27. Test Plan: A test plan is a document describing the scope, approach, resources, and schedule of intended test activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency plans. An important component of the test plan is the individual test cases.
  28. Test Scenarios/Test Cases: The terms test scenario and test case are often used synonymously. A test case is a set of test steps, each of which defines inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement. Test scenarios ensure that all business process flows are tested from end to end.
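As an illustration, a test case can be captured as structured data; the field names below are hypothetical, not a standard format:

```python
# Hypothetical test case captured as structured data: an objective,
# preconditions, and steps pairing actions with expected results.
test_case = {
    "id": "TC-001",
    "objective": "Verify login with valid credentials",
    "preconditions": ["user account exists"],
    "steps": [
        {"action": "open login page", "expected": "login form shown"},
        {"action": "enter valid credentials", "expected": "form accepts input"},
        {"action": "click Login", "expected": "user dashboard shown"},
    ],
}

# Every step must define both an action and an expected result.
assert all("action" in s and "expected" in s for s in test_case["steps"])
```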
  29. Test Suite/Scripts: A test suite/test script is the combination of test scenarios, test steps, and test data. Initially the term was derived from the product of work created by automated regression test tools. Test suites/test scripts can be manual, automated, or a combination of both.
  30. Application Terms: Application: An application is software with features. Feature/Functional point: A functional point is a feature of the application; some examples of features are search, login, signup, and edit preferences. Session: A session is the means of grouping together functional points of a single application. The session keeps state and tracks variables of all the functional points grouped in a single session.
  31. Action Point: An action point can be thought of as the act of doing. Take search, for example. Usually search has only two UI elements: a text box where the search terms are entered and a search button that submits the search terms. The action point doesn't necessarily care about the search results. It only cares that the search page was loaded correctly, the search term was inserted, and the search button was clicked. In other words, an action point doesn't validate what happened after the action occurred; it only performs the action.
  32. Validation Point: A validation point verifies the outcome of the action point. Usually an action has several possible outcomes. For example, login might behave differently depending on whether the username and password are correct, incorrect, too long, too short, or simply non-existent. The action point is the act of logging in, and the validation point verifies the outcome when using valid, invalid, too long, too short, or non-existent usernames and passwords.
  33. Navigation Point: Navigation points traverse to the action point or the validation point so it can be executed. For example, there are websites that have a navigation bar at the top no matter which page is loaded. In this case the navigation point simply clicks the link in the navigation bar from any of the pages in order to load the page on which to perform the action or validation.
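The three point types can be sketched together; the FakeBrowser stub below stands in for a real UI-driving tool, so all names are illustrative:

```python
# FakeBrowser is a stub standing in for a real UI-driving tool;
# the structure of the three point types is what matters here.

class FakeBrowser:
    def __init__(self):
        self.page = "home"
        self.results = []

    def click_nav(self, link):
        self.page = link

    def search(self, term):
        self.results = [term + " result"]

def navigation_point(browser):
    """Traverse to the page where the action will run."""
    browser.click_nav("search")

def action_point(browser, term):
    """Perform the action; do not judge its outcome."""
    browser.search(term)

def validation_point(browser, term):
    """Verify the outcome of the action."""
    assert browser.results == [term + " result"]

b = FakeBrowser()
navigation_point(b)
action_point(b, "testing")
validation_point(b, "testing")
```

Separating the three keeps each reusable: one navigation point can serve many actions, and one action can be paired with many validations.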
  34. Data Flow Testing: In data flow-based testing, the control flow graph is annotated with information about how the program variables are defined and used. Different criteria exercise, with varying degrees of precision, how a value assigned to a variable is used along different control flow paths.
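A small illustration of the definition-use idea, using a hypothetical pricing function: the variable `discount` is defined on two control flow paths and used once, so data-flow criteria would require a test exercising each definition:

```python
# `discount` is defined on two control flow paths (def 1, def 2) and
# used once; covering each definition-use pair needs one test per path.

def price(total, member):
    if member:
        discount = 0.1   # def 1
    else:
        discount = 0.0   # def 2
    return total * (1 - discount)  # use of `discount`

# One test per definition-use pair:
assert price(100, True) == 90.0    # exercises def 1 -> use
assert price(100, False) == 100.0  # exercises def 2 -> use
```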
  35. Additional Types of Tests: Performance testing – testing conducted to evaluate the compliance of a system or component with specified performance requirements. Usability testing – testing conducted to evaluate the extent to which a user can learn to operate, prepare inputs for, and interpret outputs of a system or component. Stress testing – testing conducted to evaluate a system or component at or beyond the limits of its specification or requirements. Smoke test – a group of test cases that establish that the system is stable and all major functionality is present and works under “normal” conditions. Robustness testing – testing whereby test cases are chosen outside the domain to test robustness to unexpected, erroneous input.
  36. Thank You. Delivering…