Software Testing



software testing, testing, software engineering

Published in: Education

  1. SOFTWARE TESTING: Introduction to Software Testing By
  2. WHAT IS SOFTWARE TESTING? Software testing is the process of verifying and validating that a software application or program (1) meets the business and technical requirements that guided its design and development, and (2) works as expected.
  3. SOFTWARE TESTING HAS THREE MAIN PURPOSES: VERIFICATION, VALIDATION, AND DEFECT FINDING.
     • The verification process confirms that the software meets its technical specifications.
     • The validation process confirms that the software meets the business requirements.
     • A defect is a variance between the expected and actual result.
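The "variance between expected and actual" idea can be shown in a few lines of code. This is a minimal sketch; `add_tax` and its inputs are invented for illustration:

```python
# Hypothetical function under test: add_tax is invented for illustration.
def add_tax(price, rate):
    # Intended behavior: return the price with tax applied.
    return price * (1 + rate)

expected = 107.0                  # expected result, from the specification
actual = add_tax(100.0, 0.07)     # actual result produced by the software

# A defect is a variance between the expected and actual result.
defect_found = round(actual, 2) != round(expected, 2)
```

Verification would ask whether `add_tax` matches its technical specification; validation would ask whether charging 7% tax is what the business actually required.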
  4. WHY DO SOFTWARE TESTING?
     • In February 2003 the U.S. Treasury Department mailed 50,000 Social Security checks without a beneficiary name. A spokesperson said that the missing names were due to a software program maintenance error.
     • In July 2001 a “serious flaw” was found in off-the-shelf software that had long been used in systems for tracking U.S. nuclear materials. The software had recently been donated to another country, and scientists in that country discovered the problem and told U.S. officials about it.
     • In October 1999 the $125 million NASA Mars Climate Orbiter—an interplanetary weather satellite—was lost in space due to a data conversion error. Investigators discovered that software on the spacecraft performed certain calculations in English units (yards) when it should have used metric units (meters).
     • In June 1996 the first flight of the European Space Agency's Ariane 5 rocket failed shortly after launch, resulting in an uninsured loss of $500,000,000. The disaster was traced to the lack of exception handling for a floating-point error when a 64-bit floating-point number was converted to a 16-bit signed integer.
  5. WHAT DO WE TEST? Testing can involve some or all of the following factors:
     • Business requirements
     • Functional design requirements
     • Technical design requirements
     • Regulatory requirements
     • Programmer code
     • Systems administration standards and restrictions
     • Corporate standards
     • Professional or trade association best practices
     • Hardware configuration
     • Cultural issues and language differences
  6. WHO DOES THE TESTING?
     • Software testing is not a one-person job. It takes a team, but the team may be larger or smaller depending on the size and complexity of the application being tested. The programmer(s) who wrote the application should have a reduced role in the testing if possible.
     • Testers must be cautious, curious, critical but non-judgmental, and good communicators.
  7. SOFTWARE TEST PLANNING
     • The quality of the software testing effort depends on the quality of software test planning. Test planning is a critical part of the software testing process.
  8. BELOW ARE SOME QUESTIONS AND SUGGESTIONS FOR SOFTWARE TEST PLANNING:
     - Have you planned for an overall testing schedule, the personnel required, and associated training requirements?
     - Have the test team members been given assignments?
     - Have you designed at least one black-box test case for each system function?
     - Have you designed test cases for verifying quality objectives/factors (e.g. reliability, maintainability, etc.)?
     - Have you designed test cases for verifying resource objectives?
     - Have you defined test cases for performance tests, boundary tests, and usability tests?
     - Have you designed test cases for stress tests (intentional attempts to break the system)?
     - Have you designed test cases with special input values (e.g. empty files)?
     - Have you designed test cases with default input values?
  9. - Do all test cases agree with the specification of the function or requirement to be tested?
     - Have you sufficiently considered error cases? Have you designed test cases for invalid and unexpected input conditions as well as valid conditions?
     - Have you defined test cases for white-box testing (structural tests)?
     - Have you stated the level of coverage to be achieved by structural tests?
     - Have you unambiguously provided test input data and expected test results or expected messages for each test case?
     - Have you documented the purpose of and the capability demonstrated by each test case?
     - Is it possible to meet and to measure all test objectives defined (e.g. test coverage)?
     - Have you defined the test environment and tools needed for executing the software test?
     - Have you described the hardware configuration resources needed to implement the designed test cases?
     - Have you described the software configuration needed to implement the designed test cases?
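Several of the planning questions above (boundary tests, special input values such as empty input, invalid and unexpected conditions) map directly onto concrete test cases. A minimal sketch using Python's `unittest`, where `word_count` is a made-up function under test:

```python
import unittest

# Hypothetical function under test: word_count is invented for illustration.
def word_count(text):
    """Count whitespace-separated words; None is treated as invalid input."""
    if text is None:
        raise ValueError("text must not be None")
    return len(text.split())

class WordCountPlanExamples(unittest.TestCase):
    def test_typical_value(self):            # black-box functional case
        self.assertEqual(word_count("one two three"), 3)

    def test_boundary_empty_string(self):    # special input value: empty
        self.assertEqual(word_count(""), 0)

    def test_invalid_input(self):            # invalid/unexpected condition
        with self.assertRaises(ValueError):
            word_count(None)

# Run the suite programmatically rather than via unittest.main().
suite = unittest.defaultTestLoader.loadTestsFromTestCase(WordCountPlanExamples)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Note how each test case documents its purpose and pins down an unambiguous expected result, as the checklist asks.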
  10. TEST CHARACTERISTICS
     • A good test has a high probability of finding an error. To achieve this goal, the tester must understand the software and attempt to develop a mental picture of how the software might fail.
     • A good test is not redundant. Testing time and resources are limited, so there is no point in conducting a test that has the same purpose as another test. Every test should have a different purpose.
     • A good test should be “best of breed”. The test that has the highest likelihood of uncovering a whole class of errors should be used.
     • A good test should be neither too simple nor too complex. Each test should be executed separately; combining a series of tests can cause side effects and mask errors.
  11. THE TEST PLAN
     • The test plan is a mandatory document. You can’t test without one. For simple, straightforward projects the plan doesn’t have to be elaborate, but it must address certain items.
  12. SOFTWARE TESTING PRINCIPLES
     • Testing must be done by an independent party. Testing should not be performed by the person or team that developed the software, since they tend to defend the correctness of the program.
     • Assign the best personnel to the task. Because testing requires high creativity and responsibility, only the best personnel should be assigned to design, implement, and analyze test cases, test data, and test results.
     • Testing should not be planned under the tacit assumption that no errors will be found.
     • Test for invalid and unexpected input conditions as well as valid conditions. The program should generate correct messages when an invalid test is encountered and should generate correct results when the test is valid.
  13. • The probability of the existence of more errors in a module or group of modules is directly proportional to the number of errors already found.
     • Testing is the process of executing software with the intent of finding errors.
     • Keep software static during test. The program must not be modified during the implementation of the set of designed test cases.
     • Document test cases and test results.
     • Provide expected test results. A necessary part of test documentation is the specification of expected results; without them, a plausible but erroneous outcome may be accepted as correct.
  14. SOFTWARE TESTING GOALS
     • To locate and prevent bugs as early as possible
     • To perform all tests according to the requirements, in the most effective and economical way
     • To bring the software product to a level of quality that is appropriate for the client
  16. TEST STRATEGIES FOR CONVENTIONAL SOFTWARE
     • Unit Testing: Unit testing focuses verification effort on the smallest unit of software design—the software component or module. Using the component-level design description as a guide, important control paths are tested to uncover errors within the boundary of the module.
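As a sketch of unit testing a single module's control paths, the example below writes one test per path through a made-up function, `classify_triangle`:

```python
import unittest

# Hypothetical module under test: classify_triangle is invented for illustration.
def classify_triangle(a, b, c):
    """Classify a triangle by its side lengths."""
    if a <= 0 or b <= 0 or c <= 0:
        return "invalid"
    if a + b <= c or a + c <= b or b + c <= a:
        return "invalid"          # violates the triangle inequality
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

class TriangleUnitTest(unittest.TestCase):
    # One test case per control path within the boundary of the module.
    def test_equilateral(self):
        self.assertEqual(classify_triangle(3, 3, 3), "equilateral")

    def test_isosceles(self):
        self.assertEqual(classify_triangle(3, 3, 5), "isosceles")

    def test_scalene(self):
        self.assertEqual(classify_triangle(3, 4, 5), "scalene")

    def test_invalid_sides(self):
        self.assertEqual(classify_triangle(1, 1, 10), "invalid")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TriangleUnitTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```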
  17. INTEGRATION TESTING
     • It comes in two forms:
     • Top-down Integration: Top-down integration testing is an incremental approach to construction of the program structure. Modules are integrated by moving downward through the control hierarchy, beginning with the main control module (main program); lower-level modules that are not yet integrated are temporarily replaced by stubs.
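A minimal sketch of the top-down approach, where the real top-level module is exercised while the modules beneath it are stand-in stubs (all names here are invented):

```python
# Stubs: canned stand-ins for lower-level modules not yet integrated.
def load_config_stub():
    # Returns a fixed value in place of the real configuration loader.
    return {"greeting": "hello"}

def format_greeting_stub(config, name):
    # Minimal behavior, just enough to exercise the module above it.
    return f"{config['greeting']}, {name}"

def main_control(name):
    # Real top-level control module; integrated and tested first.
    config = load_config_stub()
    return format_greeting_stub(config, name)

message = main_control("tester")   # "hello, tester"
```

As integration moves downward, each stub is replaced by the real module and the tests are rerun.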
  18. BOTTOM-UP INTEGRATION
     • Bottom-up integration testing, as its name implies, begins construction and testing with atomic modules (i.e., components at the lowest levels in the program structure). Because components are integrated from the bottom up, the processing required for components subordinate to a given level is always available, and the need for stubs is eliminated (simple test drivers are used instead).
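A minimal bottom-up sketch: the atomic components are real, and a throwaway driver stands in for the not-yet-written control module above them (all names here are invented):

```python
# Atomic (lowest-level) components: real, already unit tested.
def parse_record(line):
    # Split a "name, amount" record into its fields.
    name, amount = line.split(",")
    return name.strip(), float(amount)

def apply_discount(amount, rate=0.1):
    # Apply a percentage discount, rounded to cents.
    return round(amount * (1 - rate), 2)

def driver():
    # Test driver: exercises the low-level components in combination,
    # standing in for the main program that does not exist yet.
    name, amount = parse_record("widget, 20.00")
    return name, apply_discount(amount)

result = driver()   # ("widget", 18.0)
```

Because everything below the driver is real, no stubs are needed; the driver itself is discarded once the actual control module is integrated.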
  19. REGRESSION TESTING
     • Regression testing means rerunning test cases from existing test suites to build confidence that software changes have no unintended side effects.
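In practice this means keeping the cases that passed on earlier versions and rerunning them after every change. A minimal sketch, where `slugify` is a made-up function under test:

```python
# Hypothetical function under test: slugify is invented for illustration.
def slugify(title):
    return "-".join(title.lower().split())

# Existing suite: cases that passed on earlier versions of slugify.
REGRESSION_CASES = [
    ("Hello World", "hello-world"),
    ("  Software   Testing  ", "software-testing"),
    ("single", "single"),
]

def run_regression_suite():
    # Rerun every case; any mismatch is an unintended side effect of a change.
    return [(text, expected, slugify(text))
            for text, expected in REGRESSION_CASES
            if slugify(text) != expected]

failures = run_regression_suite()   # empty list means no regressions detected
```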
  20. VALIDATION TESTING
     • At the culmination of integration testing, software is completely assembled as a package, interfacing errors have been uncovered and corrected, and a final series of software tests—validation testing—may begin. Validation can be defined in many ways, but a simple (albeit harsh) definition is that validation succeeds when software functions in a manner that can be reasonably expected by the customer.
  21. VALIDATION TESTING CRITERIA:
     Alpha Testing
     • The alpha test is conducted at the developer's site by a customer. The software is used in a natural setting with the developer "looking over the shoulder" of the user and recording errors and usage problems. Alpha tests are conducted in a controlled environment.
     Beta Testing
     • The beta test is conducted at one or more customer sites by the end-user of the software. Unlike alpha testing, the developer is generally not present.
  22. SYSTEM TESTING
     • System testing is actually a series of different tests whose primary purpose is to fully exercise the computer-based system. Although each test has a different purpose, all work to verify that system elements have been properly integrated and perform allocated functions.
  23. • Recovery Testing: Recovery testing is a system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed.
     • Security Testing: Security testing attempts to verify that protection mechanisms built into a system will, in fact, protect it from improper penetration. To quote Beizer: "The system's security must, of course, be tested for invulnerability from frontal attack—but must also be tested for invulnerability from flank or rear attack."
  24. STRESS TESTING
     • Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume. For example: (1) special tests may be designed that generate ten interrupts per second, when one or two is the average rate; (2) input data rates may be increased by an order of magnitude to determine how input functions will respond; (3) test cases that require maximum memory or other resources are executed; (4) test cases that may cause thrashing in a virtual operating system are designed; (5) test cases that may cause excessive hunting for disk-resident data are created. Essentially, the tester attempts to break the program.
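The "abnormal quantity" idea can be sketched by driving a component with input volumes well beyond its normal load and checking that it still behaves correctly. Everything here is invented for illustration, including the component `process_batch`:

```python
import time

# Hypothetical component under test: process_batch is invented for illustration.
def process_batch(items):
    return sorted(set(items))

def stress_test(sizes=(1_000, 100_000, 1_000_000)):
    """Feed batches orders of magnitude larger than a typical (~1,000-item) load."""
    timings = {}
    for n in sizes:
        data = list(range(n, 0, -1))        # worst case: reverse-sorted input
        start = time.perf_counter()
        result = process_batch(data)
        timings[n] = time.perf_counter() - start
        assert len(result) == n             # correctness must hold under load
    return timings

timings = stress_test((1_000, 100_000))     # modest sizes for a quick run
```

Watching how the per-batch time grows across sizes shows whether the component degrades gracefully or falls over, which is exactly what the tester attempting to "break the program" wants to learn.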
  25. PERFORMANCE TESTING
     • Performance testing is designed to test the run-time performance of software within the context of an integrated system. Performance testing occurs throughout all steps in the testing process.
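A minimal, repeatable way to measure run-time performance is the standard library's `timeit` module; the function measured here (`fib`) is invented for illustration:

```python
import timeit

# Hypothetical function under measurement: an iterative Fibonacci.
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Time 1,000 calls and report the average per-call cost in microseconds.
total_seconds = timeit.timeit(lambda: fib(100), number=1_000)
avg_us = total_seconds / 1_000 * 1e6
```

Recording numbers like `avg_us` at each stage of testing gives the baseline needed to notice when a change makes the system slower.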