
Software testing

  1. “Testing is the process of executing a program with the intent of finding errors.”
  2. Error: a synonym for mistake. It may be a syntax error or a misunderstanding of the specifications; sometimes it is a logical error. Errors propagate from one phase to another: a requirements error may be magnified during design and amplified further during coding. If it is not detected before release, it may have serious implications later. An error may lead to one or more faults.
  3. Bug: when developers make mistakes while coding, these mistakes are called bugs; if the source code contains a fault, we call it a bug. Failure: a failure occurs when a fault executes. It is the deviation of the program’s output from the expected output. Failure is dynamic: the program has to execute for a failure to occur.
  4. Test & Test Case: in practice these terms are treated as synonyms. A test case contains an input description and an expected-output description. Inputs are of two types: pre-conditions (circumstances that hold before test-case execution) and actual inputs (identified by some testing method). Outputs are also of two types: post-conditions and actual outputs.
  5. Test & Test Case: a good test case has a high probability of finding an error. Test cases are valuable and useful; they need to be developed, reviewed, used, managed, and saved. Test Suite: a set of test cases is called a test suite. It may contain all possible test cases, or only effective/good ones; any combination of test cases may form a test suite.
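The structure of a test case described above (pre-conditions, actual inputs, expected outputs) can be sketched with Python's `unittest` module. The `divide` function and its values are assumed examples, not from the slides:

```python
import unittest

# Hypothetical function under test (an assumed example, not from the slides).
def divide(a, b):
    return a // b

class DivideTestCase(unittest.TestCase):
    """One test case: an input description plus an expected-output description."""

    def setUp(self):
        # Pre-conditions: circumstances established before the test executes.
        self.dividend, self.divisor = 10, 3   # actual inputs

    def test_quotient(self):
        # Post-condition / expected output: integer division of 10 by 3 is 3.
        self.assertEqual(divide(self.dividend, self.divisor), 3)

# A test suite is simply a set of such test cases.
suite = unittest.TestLoader().loadTestsFromTestCase(DivideTestCase)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```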
  6. Verification: as per IEEE, “Verification is the process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of the phase.” Hence verification activities are applied in the early phases of the SDLC to review the documents generated at the end of every phase, to ensure that we get what we expected.
  7. Validation: as per IEEE, “Validation is the process of evaluating a system or component during or at the end of the development process to determine whether it satisfies the specified requirements.” Therefore, validation requires actual execution of the program. It is also known as computer-based testing.
  8. Hence testing includes both verification and validation: Testing = Verification + Validation. Verification minimizes errors and their impact in the early phases of development.
  9. Acceptance Testing: this term is used when software is developed for a specific customer. A series of tests is conducted to enable the customer to validate all the requirements. These tests are conducted by the end user/customer and may range from ad hoc tests to well-planned systematic tests. Acceptance testing may be conducted over a few weeks or months. The discovered errors are fixed, and better-quality software is delivered to the customer.
  10. Alpha & Beta Testing: these terms are used when the software is developed as a product for anonymous customers. Formal acceptance testing is not possible in such cases, so some potential customers are identified to get their views about the product.
  11. Alpha Testing: alpha tests are conducted at the developer’s site by the customer, in a controlled environment. Alpha testing may start when the formal testing process is near completion.
  12. Beta Testing: beta tests are conducted by customers at their own sites, in a real environment that the developer cannot control; the developer is not present while they run. Customers are expected to report failures to the company; after receiving such reports, developers modify the code, remove the bugs, and prepare the product for final release.
  13. Functional Testing: refers to testing that involves only observation of the output for certain input values. This testing is based on the functionality of the program; there is no attempt to analyse the code which produces the output, and the internal structure of the code is ignored. Functional testing is therefore also called Black Box Testing, since the contents of the black box are not known.
  14. The functionality of the black box is understood completely in terms of its inputs and outputs. [Figure: input test data drawn from the input domain feeds the system under test, which produces output test data in the output domain.]
  15. There are a number of strategies to design test cases for black box testing. A few of them are: Boundary Value Analysis and Robustness Testing.
  16. Test cases that are close to boundary conditions have higher chances of detecting an error. Here, a boundary condition means an input value that is on the boundary, just above the lower boundary, or just below the upper boundary. Suppose we have an input variable ‘x’ with a range from 1 to 100; the boundary values are 1, 2, 99, and 100.
  17. Consider a program with two input variables, x and y, with specified boundaries a <= x <= b and c <= y <= d. Hence the inputs x and y are bounded by the intervals [a, b] and [c, d] respectively.
  18. For input x, we may design test cases with the values a and b, just above a, and just below b. Similarly, for input y, we may design test cases with the values c and d, just above c, and just below d. [Figure: the input domain is the rectangle a <= x <= b, c <= y <= d.]
  19. The basic idea of boundary value analysis is to use input variable values at their minimum, just above the minimum, at a nominal value, just below the maximum, and at the maximum. Here we make an assumption from reliability theory known as the “single fault” assumption: failures are rarely the result of the simultaneous occurrence of two or more faults.
  20. Thus boundary value analysis test cases are obtained by holding all but one variable at their nominal values and letting that one variable assume its extreme values. The boundary value analysis test cases for our program with two input variables ‘x’ and ‘y’, each of which may take any value from 100 to 300, are: (200, 100), (200, 101), (200, 200), (200, 299), (200, 300), (100, 200), (101, 200), (299, 200), and (300, 200).
  21. [Figure: the nine test cases plotted on the x-y plane with axes from 0 to 400; each dot represents a test case, and the inner rectangle is the domain of legal inputs.] Thus, for a program of ‘n’ variables, boundary value analysis produces (4n + 1) test cases.
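The (4n + 1) construction above can be sketched as a small generator. This is an illustrative helper, not part of the slides; the function name and interface are assumed:

```python
def bva_cases(ranges, nominal=None):
    """Boundary value analysis: hold every variable at its nominal value
    and let one variable at a time take min, min+1, max-1, and max.
    `ranges` is a list of (lo, hi) pairs, one per input variable."""
    if nominal is None:
        nominal = [(lo + hi) // 2 for lo, hi in ranges]
    cases = {tuple(nominal)}                      # the all-nominal case
    for i, (lo, hi) in enumerate(ranges):
        for v in (lo, lo + 1, hi - 1, hi):        # 4 extreme values per variable
            case = list(nominal)
            case[i] = v
            cases.add(tuple(case))
    return sorted(cases)                          # 4n + 1 distinct cases

# Two variables, each ranging over 100-300, as on slide 20.
cases = bva_cases([(100, 300), (100, 300)])
print(len(cases))   # 4*2 + 1 = 9 test cases
```

For n = 2 this reproduces exactly the nine test cases listed on slide 20.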
  22. Robustness Testing: an extension of boundary value analysis. Here we try to see what happens when the extreme values are exceeded: a value slightly greater than the maximum and a value slightly less than the minimum. That is, we deliberately go outside the legal boundary of the input domain. This type of testing is common for electrical and electronic circuits.
  23. [Figure: the robustness test cases plotted on the x-y plane; four additional test cases lie outside the legal boundary.] Therefore, the total number of test cases in robustness testing is (6n + 1), where ‘n’ is the number of input variables.
  24. White Box Testing: sometimes also called glass box testing, it permits us to examine the internal structure of the program. It guarantees that all independent paths within a module have been exercised at least once, executes all loops at their boundaries and within their operational bounds, exercises internal data structures to ensure their validity, and exercises all logical decisions on their true and false sides.
  25. White box testing may be static or dynamic. Dynamic white box testing is when we look into the program, examine the code, and watch it as it runs; it is about testing a running program. Static white box testing is when we test the program without running it, just by examining and reviewing it: a process of carefully and methodically reviewing the software design, architecture, or code for bugs without executing it. It is sometimes also called structural analysis.
  26. White box testing is necessary even after black box testing because there might be parts of the code that have not been fully exercised by black box testing. There may also be sections of code which are extra to the requirements: even if functional tests find no errors, further checking by structural testing may reveal code that is not needed by the specifications and hence was never examined by functional testing. This may be regarded as an error, since it is a deviation from the requirements.
  27. Basic Path Testing: the name given to test techniques based on selecting a set of test paths through the program and testing them; for example, picking enough paths to ensure that every source statement is executed at least once. It is most applicable to new software, for module or unit testing. It requires complete knowledge of the program’s structure and is used by developers to unit test their own code. The effectiveness of path testing reduces as the size of the code increases.
  28. Basic Path Testing: for the developer, this is the basic testing technique. It involves generating a set of paths that will cover every branch in the program, then finding a set of test cases that will execute every path in this set. Path generation can be performed through static analysis of the program’s control flow.
  29. Flow Graph: comes under basic path testing. The control flow of a program can be analysed using a graphical representation known as a flow graph: a directed graph in which nodes are either entire statements or fragments of a statement, and edges represent the flow of control. A flow graph can easily be generated from the code of any program.
  30. [Figure: flow-graph notation for the basic constructs: sequence, if-then-else, while loop, repeat-until loop, and switch statement.]
  31. Example:
      int ab(int x, int y) {
      1.  while (x != y) {
      2.    if (x > y)
      3.      x = x - y;
      4.    else y = y - x;
      5.  }
      6.  return x;
      }
      [Figure: the corresponding flow graph with nodes 1 to 6.]
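The function above computes the greatest common divisor of two positive integers by repeated subtraction. A minimal Python transcription to exercise it (the interpretation as GCD is mine; the slide only gives the code):

```python
import math

def ab(x, y):
    # Node numbering follows the flow graph: 1 = while, 2 = if,
    # 3 = x = x - y, 4 = y = y - x, 6 = return.
    while x != y:        # node 1
        if x > y:        # node 2
            x = x - y    # node 3
        else:
            y = y - x    # node 4
    return x             # node 6

# Repeated subtraction yields the GCD for positive integers.
for a, b in [(12, 18), (7, 7), (100, 75)]:
    assert ab(a, b) == math.gcd(a, b)
print(ab(12, 18))   # 6
```

Note the loop never terminates if either input is not a positive integer, which is exactly the kind of behaviour boundary value and robustness test cases are designed to expose.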
  32. Cyclomatic Complexity: also known as structural complexity, or McCabe’s complexity. It is a quantitative measure of the logical complexity of a program, used in white box testing, and is based on graph theory. The value of the cyclomatic complexity gives the number of independent paths. Generally its value should be less than 10; if it is more than 10, the module needs to be broken into smaller modules.
  33. Cyclomatic Complexity: gives an internal view of the code. This approach is used to find the number of independent paths through a program, which provides an upper bound on the number of test cases needed to ensure that every statement has been executed at least once and every condition has been tested on both its true and false sides.
  34. Cyclomatic Complexity: the cyclomatic metric of a graph G can be computed in any of three ways: (1) the number of regions in the flow graph; (2) V(G) = e - n + 2, where ‘n’ is the number of vertices and ‘e’ is the number of edges in the graph; (3) V(G) = P + 1, where P is the number of predicate nodes (a predicate node is one with two outgoing edges).
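The formula V(G) = e - n + 2 can be applied to the flow graph of the earlier ab(x, y) example. The edge list below is my reading of that graph (nodes as numbered on slide 31), so treat it as an assumption:

```python
def cyclomatic(edges):
    """V(G) = e - n + 2, where e = number of edges, n = number of vertices."""
    nodes = {v for edge in edges for v in edge}
    return len(edges) - len(nodes) + 2

# Assumed flow graph of ab(x, y): 1 = while, 2 = if, 3/4 = the two branches,
# 5 = end of loop body (back to the while), 6 = return.
edges = [(1, 2), (2, 3), (2, 4), (3, 5), (4, 5), (5, 1), (1, 6)]
print(cyclomatic(edges))   # 7 - 6 + 2 = 3 independent paths
```

Cross-checking with V(G) = P + 1: the predicate nodes (two outgoing edges each) are 1 and 2, so P = 2 and V(G) = 3, in agreement.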
  35. There are three levels of testing: Unit Testing, Integration Testing, and System Testing.
  36. Unit Testing: the process of taking a module and running it in isolation from the rest of the software, using prepared test cases and comparing the actual results with the predicted results. Unit testing is beneficial because: the size of a single module is small enough to locate an error easily; we can test the module rigorously; and confusing interactions with multiple errors in various modules are eliminated.
  37. The problem with unit testing is: how do we run a module with nothing to call it, nothing to be called by it, and no way to output intermediate values obtained during execution? One approach is to construct an appropriate driver routine to call it, to create simple stubs to be called by it, and to insert output statements in it. A stub is a dummy subprogram used to replace modules that are subordinate to the module being tested.
  38. A stub uses the subordinate module’s interface, may do minimal data manipulation, and verifies the entry and return. This overhead code, called scaffolding, represents effort that is important to testing but does not appear in the delivered product. If drivers and stubs are kept simple, the overhead is relatively low. Unfortunately, many modules cannot be tested adequately with low overhead, so in such cases complete testing can be postponed until integration testing.
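The driver-and-stub arrangement can be sketched as follows. All names here (`report_line`, `price_stub`) are invented for illustration; the slides describe the technique, not this code:

```python
# Module under test: formats a report line, calling a subordinate module
# to fetch the price.
def report_line(item_id, fetch_price):
    price = fetch_price(item_id)          # call into the subordinate module
    return f"{item_id}: {price:.2f}"

# Stub: a dummy subprogram replacing the real (perhaps unfinished)
# price-lookup module; it verifies the entry and returns canned data.
def price_stub(item_id):
    assert isinstance(item_id, str)       # check the interface contract
    return 9.99                           # minimal data manipulation

# Driver: scaffolding that calls the module in isolation and checks results.
def driver():
    result = report_line("A42", price_stub)
    assert result == "A42: 9.99"
    print("unit test passed")

driver()
```

Neither `price_stub` nor `driver` ships in the delivered product; they exist only so `report_line` can be exercised before its real collaborators are integrated.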
  39. Integration testing must be performed because the purpose of unit testing is to determine that each independent module is correctly implemented, which gives little chance to determine that the interfaces between modules are also correct. One specific target of integration testing is therefore the interface: whether parameters match on both sides in type, permissible ranges, and so on.
  40. There are several classical strategies for integration testing: Top-Down Integration proceeds down the hierarchy, adding one module at a time until an entire tree level is integrated, and thus eliminates the need for drivers. Bottom-Up Integration works similarly from the bottom and has no need of stubs. A Sandwich Strategy runs from the top and bottom concurrently, meeting somewhere in the middle.
  41. [Figure: top-down and bottom-up integration.]
  42. [Figure: sandwich integration.]
  43. The system level is closest to everyday experience: we evaluate a product in terms of our expectations, not with respect to specifications or standards. The goal is not to find faults but to demonstrate performance. Because of this, system testing is a functional approach; testing the system’s capabilities is more important than testing its components.
  44. During system testing we evaluate a number of attributes of the software:
      Usable: Is the product convenient, clear, and predictable?
      Secure: Is access to sensitive data restricted to those with authorization?
      Compatible: Will the product work correctly in conjunction with existing data, software, and procedures?
      Dependable: Do adequate safeguards against failure and methods for recovery exist in the product?
      Documented: Are the manuals complete, correct, and understandable?
