Approximately 10 to 25 percent of a system’s development and maintenance effort goes toward developing and maintaining documentation. It is important to ensure that the right documentation has been prepared, is complete and current, reflects the criticality of the system, and contains all necessary elements. If any part of the documentation is not current, the tester must assume that none of it may be current and look for other sources to substantiate what has been developed and how it works. The testing of documentation should conform to other aspects of systems testing. Documentation is as prone to error as computer programs. The difference is that defective programs usually lead to defective results, whereas defective documentation may not. However, defective documentation is a time bomb: it can cause systems to be improperly changed or system output to be improperly used. Both of these errors can lead to incorrect system results. Source: “Effective Methods for Software Testing” by William Perry
<ul><li>The concerns regarding computer systems documentation are that the documentation will fail to: </li></ul><ul><li>Bring discipline to the performance of an IT function </li></ul><ul><li>Assist in planning and managing resources </li></ul><ul><li>Help in planning and implementing security procedures </li></ul><ul><li>Assist auditors in evaluating application systems </li></ul><ul><li>Help transfer knowledge of software development throughout the life cycle </li></ul><ul><li>Promote common understanding and expectations about the system within the organization </li></ul><ul><li>Define what is expected and verify that it is what is delivered </li></ul><ul><li>Provide a basis for training individuals in how to maintain the software </li></ul><ul><li>Provide managers with technical documents to determine that requirements have been met </li></ul>
<ul><li>Testing the adequacy of system documentation consists of the following tasks: </li></ul><ul><li>Measure project documentation needs </li></ul><ul><li>Determine what documents must be produced </li></ul><ul><li>Determine the completeness of individual documents </li></ul><ul><li>Determine how current project documents are </li></ul>
Measure project documentation needs The formality, extent, and level of detail of the documentation to be prepared depend on the organization’s management practices and the project’s size, complexity, and risk. What is adequate for one project may be inadequate for another. The first task in testing documentation is to test the sufficiency or adequacy of the documentation produced. Too much documentation can also be wasteful. An important part of testing documentation is to determine first that the right documentation is prepared; there is little value in confirming that unneeded documentation is adequately prepared.
Determine what documents must be produced Method for determining the level of documentation needed: Level 1 (minimal): documentation guidelines are applicable to single-use programs of minimal complexity. A suggested cost criterion for programs categorized as level 1 is that they require less than one worker-month of effort. Level 2 (internal): documentation applies to special-purpose programs that, after careful consideration, appear to have no sharing potential and to be designed for use only by the requesting department. Large programs with a short life expectancy also fall into this category. The effort spent toward formal documentation for level 2 programs should be minimal.
Level 3 (working document): documentation applies to programs that are expected to be used by many people in the same organization or that may be transmitted on request to other organizations, contractors, etc. These documents should undergo a more stringent documentation review. Level 4 (formal publication): documentation applies to programs that are of sufficient general interest and whose documentation is critical to program operation. These documents should be formally reviewed, tested, and subject to configuration control procedures.
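The four-level scheme above is essentially a decision rule. A minimal sketch of that rule, assuming illustrative inputs (effort in worker-months, whether the program is shared outside the requesting department, and whether it is of general interest) that are not part of any formal standard:

```python
# Hypothetical classifier for the four documentation levels described above.
# The parameter names and the one-worker-month threshold follow the lecture
# text; everything else is an illustrative assumption.

def documentation_level(effort_months: float,
                        shared_outside_dept: bool,
                        general_interest: bool) -> int:
    """Return the suggested documentation level (1-4) for a program."""
    if general_interest:
        return 4  # formal publication: reviewed, tested, configuration-controlled
    if shared_outside_dept:
        return 3  # working document: stricter documentation review
    if effort_months < 1:
        return 1  # minimal: single-use program of minimal complexity
    return 2      # internal: special-purpose, no sharing potential
```

For example, a two-worker-month program used only within the requesting department would land in level 2, while the same program shared with a contractor would move to level 3.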
<ul><li>Determine the completeness of individual documents </li></ul><ul><li>Testers must determine whether each document is adequately prepared based on the following: </li></ul><ul><li>Documentation content </li></ul><ul><li>Document audience: the information must be presented with the level of detail appropriate to the audience </li></ul><ul><li>The information that should be included in each document type is complete </li></ul>
<ul><li>Testing the completeness of documentation </li></ul><ul><li>In testing the completeness of documentation, tests will reveal whether: </li></ul><ul><li>The documentation is understandable to an independent person </li></ul><ul><li>An independent person can use the documentation to correctly operate the system in an efficient, effective manner. </li></ul>
Creating the software testing environment is not a trivial task. Each phase of software development has a parallel testing activity. Testers create test cases from the documents produced at each development phase. [Figure: V-model — requirements define system tests, the design specification defines integration tests, and unit specifications define unit tests; after implementing the units, testing proceeds upward through unit test, integration test, and system test.]
<ul><li>The requirements document provides input to define system test cases and drives the design phase </li></ul><ul><li>The design phase consists of refining the design from a high level down to a detailed level. Each design level defines a part of the system, and thus requires integration tests to ensure that each component works as an incremental element. </li></ul><ul><li>The unit phase provides the specifications and eventually the code for each unit. Unit specifications are used to define unit tests </li></ul>
Unit testing Unit testing consists of verifying each individual unit in isolation by running tests in an artificial environment. Most people divide unit tests into two categories: white box and black box. Developers use the code’s inner structure and control flow to construct white box tests. Black box tests derive from the requirements and other specifications, without any knowledge of the application’s internal structure and control. White box tests find bugs, but tests based on the code’s internal structure pose the danger of verifying the code works as written, without ensuring that the logic is correct. This is where black box tests come in to ensure that specific inputs yield the correct expected outcome. Unit testing is the first opportunity for exercising source code. By evaluating each unit in isolation, and ensuring that each works on its own, one can more easily pinpoint problems than if the unit were part of a larger system
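The white box/black box distinction above can be made concrete with a small sketch. The unit under test and the test cases below are hypothetical, not taken from any system in the lecture; the black-box cases derive only from the stated specification, while the white-box case targets a specific branch visible in the code:

```python
import unittest

# Hypothetical unit under test: classify a triangle from its three sides.
def classify_triangle(a, b, c):
    """Return 'equilateral', 'isosceles', 'scalene', or 'invalid'."""
    if a <= 0 or b <= 0 or c <= 0 or a + b <= c or a + c <= b or b + c <= a:
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

class TriangleTests(unittest.TestCase):
    # Black-box tests: derived from the specification alone, with no
    # knowledge of the code's internal structure.
    def test_equilateral(self):
        self.assertEqual(classify_triangle(3, 3, 3), "equilateral")

    def test_degenerate_triangle_is_invalid(self):
        self.assertEqual(classify_triangle(1, 2, 3), "invalid")

    # White-box test: chosen by inspecting the control flow, to exercise
    # the final a == c comparison that black-box cases might never reach.
    def test_isosceles_via_last_branch(self):
        self.assertEqual(classify_triangle(4, 3, 4), "isosceles")
```

Note how the white-box case verifies a path through the code as written, while the black-box cases check that specified inputs yield the specified outcomes; a thorough unit test suite needs both.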
Integration testing A software integration strategy defines the order in which to merge individual units. Integration is a process that starts with a set of units each individually tested in isolation and ends when the entire application (or sub-system) has been built. Integration testing verifies that the combined units function together correctly. This can facilitate finding problems that occur at interfaces or communication between the individual parts. Integrating software and integration testing typically are parallel activities. As a component is added to the growing system, tests verify that the interim configuration works as expected prior to adding more components.
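A minimal sketch of the idea, assuming two hypothetical units (a parser and an evaluator) that were each tested in isolation; the integration test exercises the interface between them rather than either unit's internals:

```python
# Unit 1 (already unit-tested in isolation): tokenize "2 + 3" into a list.
def parse(expression: str) -> list:
    return expression.split()

# Unit 2 (already unit-tested in isolation): evaluate a [left, op, right]
# token list produced by parse().
def evaluate(tokens: list) -> float:
    left, op, right = float(tokens[0]), tokens[1], float(tokens[2])
    return left + right if op == "+" else left - right

def test_integration():
    # The integration test drives both units through their shared interface
    # (the token-list format), where interface mismatches would surface.
    assert evaluate(parse("2 + 3")) == 5.0
    assert evaluate(parse("7 - 4")) == 3.0
```

A defect such as `parse` returning a tuple while `evaluate` expects a list would pass both units' isolated tests but fail here, which is exactly the class of problem integration testing targets.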
<ul><li>System testing </li></ul><ul><li>System testing verifies the entire product, validates it according to the original project requirements. Some of the major categories of system tests include: </li></ul><ul><ul><ul><li>Compatibility testing </li></ul></ul></ul><ul><ul><ul><li>Configuration testing </li></ul></ul></ul><ul><ul><ul><li>Functional testing </li></ul></ul></ul><ul><ul><ul><li>Installation testing </li></ul></ul></ul><ul><ul><ul><li>Load testing </li></ul></ul></ul><ul><ul><ul><li>Performance testing </li></ul></ul></ul><ul><ul><ul><li>Recovery testing </li></ul></ul></ul><ul><ul><ul><li>Reliability testing </li></ul></ul></ul><ul><ul><ul><li>Security testing </li></ul></ul></ul><ul><ul><ul><li>Serviceability testing </li></ul></ul></ul><ul><ul><ul><li>Stress testing </li></ul></ul></ul><ul><ul><ul><li>Usability testing </li></ul></ul></ul>
Regression testing Regression testing consists of reusing a subset of previously executed tests on new versions of the application. The goal is to ensure that features that worked in previous versions still work as expected. Adding new features sometimes invalidates old regression tests. Testers may need to update existing tests to account for new product features. Running a regression test re-checks the integrity of the modified application. In an ideal test environment, the tester re-executes regression tests every time the application changes.
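One common way to implement this is to replay a baseline of input/output pairs recorded from a known-good release against the new build. The function and the baseline values below are illustrative assumptions, not from the lecture:

```python
# Hypothetical unit whose behavior must not regress across releases:
# apply a 10% discount, rounded to cents.
def discount(price: float) -> float:
    return round(price * 0.9, 2)

# Expected input/output pairs captured from the previous, known-good release.
BASELINE = [(100.0, 90.0), (19.99, 17.99), (0.0, 0.0)]

def run_regression() -> list:
    """Return the baseline cases the current version no longer satisfies."""
    return [(inp, exp) for inp, exp in BASELINE if discount(inp) != exp]
```

An empty result means the modified application still honors its old behavior; any returned pairs are candidate regressions to report. If a new feature legitimately changes the discount rule, the baseline itself must be updated, which is the test-maintenance cost the paragraph above describes.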
Acceptance testing Acceptance testing validates the system against the user’s requirements and ensures that the application is ready for operational use. This phase of testing occurs after the completion of system testing. Acceptance tests consist of typical user scenarios focusing on major functional requirements. Acceptance tests are often executed at the customer site for final handoff. Acceptance testing is the final checkpoint prior to delivery. The end user may often execute the acceptance tests, which are often a selected subset of system test cases run in the real environment.
Software testing requires more than simply creating and executing tests. Before beginning to test, the tester must devise the overall test strategy, including how to record problems found during testing. Ideally, the tester creates a test plan at every level of testing, from the system level through integration down to unit-level testing. A system test plan describes the requirements, resources, strategies, and schedule for testing an application. The information contained in the test plan aids the tester in acquiring the necessary resources for creating the test environment and in defining the approach for creating the tests.
Problem reporting system Problem reporting is a process for initiating fixes, enhancements, and approvals, and for tracking the progress of change. It provides a method for managing change and for minimizing the impact of rework, both of which are essential for controlling quality. Many commercial problem reporting tools exist today. These tools provide metrics and reports to identify deficiencies and to monitor test status. A problem reporting system is the primary means of communication between testers and developers. The tester records every problem found and provides detailed information for reproducing it, although some problems are not easily reproducible. Once the developer has fixed the problem and a new release is available for testing, the tester re-executes the associated test case to ensure that the problem has been fixed.
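The tester/developer workflow described above can be sketched as a minimal problem-report record. This is an illustrative toy, not the data model of any real tool (commercial systems track many more fields and states); the field names and status values are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class ProblemReport:
    """Minimal, hypothetical problem report: open -> fixed -> verified."""
    report_id: int
    summary: str
    steps_to_reproduce: list = field(default_factory=list)  # for reproducing the problem
    status: str = "open"

    def mark_fixed(self):
        # Developer records a fix; the tester must still re-run the test.
        self.status = "fixed"

    def verify(self, retest_passed: bool):
        # Tester re-executes the associated test case on the new release;
        # a failed retest reopens the report.
        self.status = "verified" if retest_passed else "open"
```

The key design point mirrored here is that only the tester's re-execution of the failing test, not the developer's fix alone, closes the loop on a report.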
Test reporting The primary goal of a test report is to describe what occurred during the test activities. A typical test report identifies the configurations and test environment used, and then enumerates the tests executed and their results. This document provides an audit trail of what the tester accomplished while testing the application. Metrics derived from the test statistics help determine the application’s readiness for use.
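As a sketch of deriving such metrics, assume test results are recorded as a simple name-to-outcome mapping; the result format and the metrics chosen are illustrative assumptions, not a reporting standard:

```python
# Hypothetical metrics derivation for a test report: summarize a mapping
# of test name -> "pass" or "fail" into counts and a pass rate.
def summarize(results: dict) -> dict:
    total = len(results)
    passed = sum(1 for outcome in results.values() if outcome == "pass")
    return {
        "total": total,
        "passed": passed,
        "failed": total - passed,
        "pass_rate": passed / total if total else 0.0,
    }
```

A report might then state, for example, that 2 of 3 executed tests passed, leaving the readiness judgment (is a 67% pass rate acceptable?) to the exit criteria defined in the test plan.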
<ul><li>A good system test plan defines the following: </li></ul><ul><ul><li>Overview of the schedule for test activities </li></ul></ul><ul><ul><li>Approach to testing, including usage of test tools </li></ul></ul><ul><ul><li>Test tools, including how and when to obtain them </li></ul></ul><ul><ul><li>Process by which to conduct tests and report results </li></ul></ul><ul><ul><li>System test entry and exit criteria </li></ul></ul><ul><ul><li>Personnel required to design, develop, and execute tests </li></ul></ul><ul><ul><li>Equipment resources – what machines and test benches are needed </li></ul></ul><ul><ul><li>Test coverage goals, where appropriate </li></ul></ul><ul><ul><li>Special configurations of software and hardware needed for tests </li></ul></ul><ul><ul><li>Strategies for testing the application </li></ul></ul><ul><ul><li>Features that will and will not be tested </li></ul></ul><ul><ul><li>Risks and contingency plans </li></ul></ul>
We have now finished the “software testing” part of the lecture. Starting with the next lecture, we will discuss software configuration management, software maintenance, and how to assure software reliability. Homework assignment (03/25/04): please read chapters 13 and 14.