
- 1. Importance of Testing in SDLC & Various Kinds of Testing
- 2. Software Development Lifecycle
  - All software development can be characterized as a problem-solving loop in which four distinct stages are encountered:
  - Status quo: represents the current state of affairs.
  - Problem definition: identifies the specific problem to be solved.
  - Technical development: solves the problem through the application of some technology.
  - Solution integration: delivers the results (e.g., documents, programs, data, new business functions, new products) to those who requested the solution in the first place.
- 3. Waterfall Model: System/information engineering -> Analysis -> Design -> Code -> Test
- 4. The Prototyping Model: Listen to customer -> Build/revise mock-up -> Customer test-drives mock-up (repeat)
- 5. The RAD Model: Teams #1, #2, and #3 each work in parallel through Business modeling -> Data modeling -> Process modeling -> Application modeling -> Test and turnover
- 6. Boehm’s Spiral Model
- 7. V-Model: each development phase on the left arm pairs with a test phase on the right arm. Requirements Specification (SRS) pairs with System Test / Acceptance Test; System Design pairs with System Integration Test; Detailed Design (module designs) pairs with Integration Test; Coding pairs with Unit Test. Artifacts along the way: SRS, system design, module designs, code, user manual; test outputs: tested modules, integrated software, tested software.
- 8. Importance of Software Testing in SDLC
  - It helps verify whether all the software requirements are implemented correctly.
  - It identifies defects and ensures they are addressed before software deployment. A defect found after deployment costs much more to fix than one caught at an earlier stage of development.
  - Effective testing demonstrates that software functions appear to work according to specification and that behavioral and performance requirements appear to have been met.
  - When a system is developed as separate components, testing helps verify the proper integration/interaction of each component with the rest of the system.
  - Data collected as testing is conducted provides a good indication of software reliability and some indication of software quality as a whole.
- 9. Different Types of Testing
  - Dynamic vs. static testing
  - Development vs. independent testing
  - Black-box vs. white-box testing
  - Behavioral vs. structural testing
  - Automated vs. manual testing
  - Sanity, acceptance, and smoke testing
  - Regression testing
  - Exploratory and monkey testing
  - Debugging vs. bebugging
- 10. Dynamic vs. Static
  - Static testing: testing something that is not running, by examining and reviewing it.
  - Dynamic testing: what you would normally think of as testing, i.e., running and using the software.
- 11. Development vs. Independent Testing
  - Development testing denotes the aspects of test design and implementation most appropriate for the team of developers to undertake. In most cases, test execution initially occurs with the developer testing group that designed and implemented the test, but it is good practice for developers to create their tests so they are available to independent testing groups for execution.
  - Independent testing denotes the test design and implementation most appropriately performed by someone independent from the team of developers. You can consider this distinction a superset that includes Independent Verification & Validation. In most cases, test execution initially occurs with the independent testing group that designed and implemented the test, but independent testers should create their tests so they are available to the developer testing groups for execution.
- 12. Black-Box vs. White-Box Testing
  - The purpose of a black-box test is to verify the unit's specified function and observable behavior without knowledge of how the unit implements them. Black-box tests focus on and rely upon the unit's inputs and outputs.
  - A white-box test approach should be taken to verify a unit's internal structure. Theoretically, you should test every possible path through the code, but that is possible only in very simple units. At the very least you should exercise every decision-to-decision path (DD-path) at least once, because you are then executing all statements at least once. A decision is typically an if-statement, and a DD-path is a path between two decisions.
- 13. Behavioral vs. Structural Testing
  - Behavioral testing: another name for black-box testing, since you test the behavior of the software as it is used, without knowing how its internal logic is implemented.
  - Structural testing: another name for white-box testing, in which you can see and use the underlying structure of the code to design and run your tests.
- 14. Automated vs. Manual
  - Automated testing: software testing assisted by software tools that require no operator input, analysis, or evaluation.
  - Manual testing: the part of software testing that requires human input, analysis, or evaluation.
- 15. Sanity, Acceptance, and Smoke Testing
  - Sanity testing: cursory testing, performed whenever a cursory check is sufficient to prove that the application functions according to specifications. It normally includes a set of core tests of basic GUI functionality to demonstrate connectivity to the database, application servers, printers, etc.
  - Acceptance testing: the final test action before deploying the software. Its goal is to verify that the software is ready and that your end users can use it to perform the functions and tasks for which it was built.
  - Smoke testing: non-exhaustive testing that ascertains that the most crucial functions of a program work, without bothering with finer details.
- 16. Regression Testing
  - The selective retesting of a modified software system, to ensure that bugs have been fixed, that no previously working functions have failed as a result of the modifications, and that newly added features have not created problems with previous versions of the software.
  - Regression testing is initiated after a programmer has attempted to fix a recognized problem or has added source code that may have inadvertently introduced errors. It is a quality-control measure ensuring that newly modified code still complies with its specified requirements and that unmodified code has not been affected by the maintenance activity.
- 17. Exploratory and Monkey Testing
  - Exploratory testing involves simultaneously learning, planning, running tests, and reporting/troubleshooting results.
  - Monkey testing: another name for ad hoc testing. It comes from the joke that 100 monkeys at 100 typewriters, randomly punching keys, will sooner or later type out a Shakespearean sonnet; so every time one of your ad hoc testers finds a new bug, you can toss them a banana. Monkey testing is used to simulate how your customers will use your software in real use.
- 18. Debugging vs. Bebugging
  - Debugging: the process of finding and removing the causes of failures in software. This role is performed by a programmer.
  - Bebugging: the process of intentionally adding known faults to those already in a program, for the purpose of monitoring the rate of detection and removal and estimating the number of faults remaining in the program.
- 19. Black Box & White Box Testing Techniques
- 20. Black-Box Testing
  - The program is viewed as a black box that accepts some inputs and produces some outputs.
  - Test cases are derived solely from the specifications, without knowledge of the internal structure of the program.
- 21. Functional Test-Case Design Techniques
  - Equivalence class partitioning
  - Boundary value analysis
  - Cause-effect graphing
  - Error guessing
- 22. Equivalence Class Partitioning
  - Partition the program's input domain into equivalence classes: classes of data that, according to the specifications, are treated identically by the program.
  - The basis of this technique is that a test of a representative value of each class is equivalent to a test of any other value of the same class.
  - Identify valid as well as invalid equivalence classes.
  - For each equivalence class, generate a test case exercising an input representative of that class.
- 23. Example
  - Input condition: 0 <= x <= max
    - Valid equivalence class: 0 <= x <= max
    - Invalid equivalence classes: x < 0, x > max
  - This gives 3 test cases.
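The example above can be sketched in code. This is a minimal sketch, assuming a hypothetical function under test (`validate` and `MAX` are illustrative names, not from the slides) whose specified input condition is 0 <= x <= MAX:

```python
# Hypothetical function under test; its specified input condition is
# 0 <= x <= MAX. (validate and MAX are illustrative, not from the slides.)
MAX = 100

def validate(x):
    return 0 <= x <= MAX

# One representative value per equivalence class gives three test cases:
cases = [
    (50, True),    # valid class: 0 <= x <= MAX
    (-7, False),   # invalid class: x < 0
    (150, False),  # invalid class: x > MAX
]
for value, expected in cases:
    assert validate(value) == expected
```

Any other representative of a class (say 12 instead of 50) is, by the partitioning assumption, an equivalent test.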
- 24. Guidelines for Identifying Equivalence Classes
  - Range of values (e.g., 1-200): one valid class (a value within the range); two invalid classes (one outside each end of the range).
  - Number of N valid values: one valid class; two invalid classes (none, and more than N).
  - Set of input values, each handled differently by the program (e.g., A, B, C): one valid class for each value; one invalid class (e.g., any value not in the valid input set).
- 25. Guidelines for Identifying Equivalence Classes (contd.)
  - "Must be" condition (e.g., an ID name must begin with a letter): one valid class (it is a letter); one invalid class (it is not a letter).
  - If you know that elements in an equivalence class are not handled identically by the program, split the equivalence class into smaller equivalence classes.
- 26. Identifying Test Cases for Equivalence Classes
  - Assign a unique number to each equivalence class.
  - Until all valid equivalence classes have been covered by test cases, write a new test case covering as many of the uncovered valid equivalence classes as possible.
  - Cover each invalid equivalence class with a separate test case.
- 27. Boundary Value Analysis
  - Design test cases that exercise values lying at the boundaries of an equivalence class, and for situations just beyond the ends.
  - Example: input condition 0 <= x <= max
    - Test values: 0, max (valid inputs); -1, max+1 (invalid inputs)
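Boundary value analysis for the same input condition can be sketched as follows (again assuming a hypothetical `validate` function with the specification 0 <= x <= MAX; the names are illustrative):

```python
MAX = 100

def validate(x):
    # Hypothetical function under test: valid only when 0 <= x <= MAX
    return 0 <= x <= MAX

# Values at each boundary and just beyond it:
boundary_cases = [
    (0, True),          # lower boundary (valid)
    (MAX, True),        # upper boundary (valid)
    (-1, False),        # just below the lower boundary (invalid)
    (MAX + 1, False),   # just above the upper boundary (invalid)
]
for value, expected in boundary_cases:
    assert validate(value) == expected
```

Off-by-one errors cluster at exactly these four values, which is why they are tested in preference to arbitrary class members.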
- 28. Cause-Effect Graphing
  - A technique that aids in selecting test cases for combinations of input conditions in a systematic way.
- 29. Cause-Effect Graphing Technique
  - 1. Identify the causes (input conditions) and effects (output conditions) of the program under test.
  - 2. For each effect, identify the causes that can produce that effect, and draw a cause-effect graph.
  - 3. Generate a test case for each combination of input conditions that makes some effect true.
- 30. Example
  - Consider a program with the following conditions:
    - Causes (input conditions): c1: command is credit; c2: command is debit; c3: A/C is valid; c4: transaction amount not valid
    - Effects (output conditions): e1: print invalid command; e2: print invalid A/C; e3: print debit amount not valid; e4: debit A/C; e5: credit A/C
- 31. Example: Cause-Effect Graph (figure: causes C1-C4 linked through and/or/not nodes to effects E1-E5; the graph itself is not reproducible in text)
- 32. Example: Cause-Effect Graph, continued (a second view of the same graph)
- 33. Example
  - Decision table showing the combinations of input conditions that make each effect true (summarized from the cause-effect graph):

        Rule   C1  C2  C3  C4   Effect
        R1      0   0   -   -   E1
        R2      1   -   0   -   E2
        R3      -   1   1   0   E3
        R4      -   1   1   1   E4
        R5      1   -   1   1   E5

  - Write test cases to exercise each rule in the decision table.
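One way to turn the rules into executable checks is sketched below. The `classify` function and its branch logic are our interpretation of the credit/debit cause-effect graph (one rule per effect), not code from the slides:

```python
def classify(is_credit, is_debit, ac_valid, amount_valid):
    # Assumed semantics of the slide's cause-effect graph: each input
    # combination produces exactly one effect.
    if not is_credit and not is_debit:
        return "E1: invalid command"
    if not ac_valid:
        return "E2: invalid A/C"
    if is_debit and not amount_valid:
        return "E3: debit amount not valid"
    if is_debit:
        return "E4: debit A/C"
    return "E5: credit A/C"

# One test case per rule in the decision table:
assert classify(False, False, True,  True)  == "E1: invalid command"
assert classify(True,  False, False, True)  == "E2: invalid A/C"
assert classify(False, True,  True,  False) == "E3: debit amount not valid"
assert classify(False, True,  True,  True)  == "E4: debit A/C"
assert classify(True,  False, True,  True)  == "E5: credit A/C"
```

Each assertion corresponds to one decision-table rule; the "don't care" entries (-) are filled with arbitrary concrete values.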
- 34. Error Guessing
  - From intuition and experience, enumerate a list of possible errors or error-prone situations, then write test cases to expose those errors.
- 35. White-Box Testing
  - White-box testing is concerned with the degree to which test cases exercise or cover the logic (source code) of the program.
  - White-box test-case design techniques:
    - Statement coverage
    - Decision coverage
    - Condition coverage
    - Decision-condition coverage
    - Multiple condition coverage
    - Basis path testing
    - Loop testing
    - Data flow testing
- 36. White-Box Test-Case Design
  - Statement coverage
    - Write enough test cases to execute every statement at least once.
    - TER (Test Effectiveness Ratio): TER1 = statements exercised / total statements
- 37. Example

        void eval(int A, int B, int X)
        {
            if (A > 1) and (B = 0)
                then X = X / A;
            if (A = 2) or (X > 1)
                then X = X + 1;
        }

  - Statement coverage test case: 1) A = 2, B = 0, X = 3 (X can be assigned any value)
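As a sketch, the pseudocode above can be rendered in Python (`eval_fn` is our name, chosen to avoid shadowing the built-in `eval`) to confirm that the single test case really executes every statement:

```python
def eval_fn(a, b, x):
    # Python rendering of the slide's pseudocode
    if a > 1 and b == 0:
        x = x / a
    if a == 2 or x > 1:
        x = x + 1
    return x

# The single case A=2, B=0, X=3 makes both decisions true, so both
# if-bodies run and every statement executes at least once (TER1 = 1.0):
result = eval_fn(2, 0, 3)  # x -> 3/2 = 1.5, then 1.5 + 1 = 2.5
assert result == 2.5
```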
- 38. White-Box Test-Case Design (contd.)
  - Decision coverage
    - Write test cases to exercise the true and false outcomes of every decision.
    - TER2 = branches exercised / total branches
  - Condition coverage
    - Write test cases such that each condition in a decision takes on all possible outcomes at least once.
    - May not always satisfy decision coverage.
- 39. Example
  - (Flow graph of the eval pseudocode: b and c are the false and true branches of the first decision, d and e the false and true branches of the second.)
  - Decision coverage test cases:
    - 1) A = 3, B = 0, X = 3 (path acd)
    - 2) A = 2, B = 1, X = 1 (path abe)
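The two decision-coverage cases can be checked directly against a Python rendering of the pseudocode (`eval_fn` is our assumed translation, not slide code):

```python
def eval_fn(a, b, x):
    if a > 1 and b == 0:
        x = x / a
    if a == 2 or x > 1:
        x = x + 1
    return x

# Case 1 (path acd): first decision true, second false
assert eval_fn(3, 0, 3) == 1.0   # 3/3 = 1.0; then neither A=2 nor X>1
# Case 2 (path abe): first decision false, second true
assert eval_fn(2, 1, 1) == 2     # division skipped; A=2, so x becomes 2
```

Together the two cases exercise both outcomes of both decisions, so TER2 = 4/4.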
- 40. Example
  - Condition coverage test cases must cover the conditions A > 1, A <= 1, B = 0, B != 0 and A = 2, A != 2, X > 1, X <= 1.
  - Test cases:
    - 1) A = 1, B = 0, X = 3 (path abe)
    - 2) A = 2, B = 1, X = 1 (path abe)
  - These cover every condition outcome but do not satisfy decision coverage: the first decision never evaluates true and the second never evaluates false.
- 41. White-Box Test-Case Design (contd.)
  - Decision-condition coverage
    - Write test cases such that each condition in a decision takes on all possible outcomes at least once, and each decision takes on all possible outcomes at least once.
  - Multiple condition coverage
    - Write test cases to exercise all possible combinations of true and false outcomes of the conditions within a decision.
- 42. Example
  - Decision-condition coverage test cases must cover the conditions A > 1, A <= 1, B = 0, B != 0 and A = 2, A != 2, X > 1, X <= 1, and also both outcomes of (A > 1 and B = 0) and of (A = 2 or X > 1).
  - Test cases:
    - 1) A = 2, B = 0, X = 4 (path ace)
    - 2) A = 1, B = 1, X = 1 (path abd)
- 43. Example
  - Multiple condition coverage must cover the combinations:
    - 1) A > 1, B = 0
    - 2) A > 1, B != 0
    - 3) A <= 1, B = 0
    - 4) A <= 1, B != 0
    - 5) A = 2, X > 1
    - 6) A = 2, X <= 1
    - 7) A != 2, X > 1
    - 8) A != 2, X <= 1
  - Test cases:
    - 1) A = 2, B = 0, X = 4 (covers 1, 5)
    - 2) A = 2, B = 1, X = 1 (covers 2, 6)
    - 3) A = 1, B = 0, X = 2 (covers 3, 7)
    - 4) A = 1, B = 1, X = 1 (covers 4, 8)
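The four multiple-condition-coverage cases can likewise be run against a Python rendering of the pseudocode (`eval_fn` is an assumed translation):

```python
def eval_fn(a, b, x):
    if a > 1 and b == 0:
        x = x / a
    if a == 2 or x > 1:
        x = x + 1
    return x

# Four cases covering all eight condition combinations:
assert eval_fn(2, 0, 4) == 3.0  # combos 1, 5: 4/2 = 2.0, then 2.0 + 1
assert eval_fn(2, 1, 1) == 2    # combos 2, 6: no division; A=2, so 1 + 1
assert eval_fn(1, 0, 2) == 3    # combos 3, 7: no division; X>1, so 2 + 1
assert eval_fn(1, 1, 1) == 1    # combos 4, 8: both decisions false
```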
- 44. Basis Path Testing
  - 1. Draw the control-flow graph of the program from its detailed design or code.
  - 2. Compute the cyclomatic complexity V(G) of the flow graph using any of the formulas:
    - V(G) = #edges - #nodes + 2
    - V(G) = #regions in the flow graph
    - V(G) = #predicates + 1
- 45. Example
  - (Flow graph with 13 nodes, 17 edges, regions R1-R6, and 5 predicate nodes; figure not reproduced.)
  - V(G) = 6 regions
  - V(G) = #edges - #nodes + 2 = 17 - 13 + 2 = 6
  - V(G) = 5 predicate nodes + 1 = 6
  - So there are 6 linearly independent paths.
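The three formulas agree on the example's numbers, which a trivial check confirms:

```python
# Counts from the slide's example flow graph:
edges, nodes, regions, predicates = 17, 13, 6, 5

def cyclomatic_complexity(e, n):
    # V(G) = E - N + 2 for a connected control-flow graph
    return e - n + 2

assert cyclomatic_complexity(edges, nodes) == 6  # 17 - 13 + 2
assert regions == 6                              # region count
assert predicates + 1 == 6                       # predicates + 1
```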
- 46. Basis Path Testing (contd.)
  - 3. Determine a basis set of linearly independent paths.
  - 4. Prepare test cases that force execution of each path in the basis set.
  - The cyclomatic complexity provides an upper bound on the number of tests that must be designed to guarantee coverage of all program statements.
- 47. Loop Testing
  - Aims to expose bugs in loops.
  - Fundamental loop-test criteria:
    - 1) Bypass the loop altogether.
    - 2) One pass through the loop.
    - 3) Two passes through the loop before exiting.
    - 4) A typical number of passes through the loop, unless covered by some other test.
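A minimal sketch of the four criteria applied to a simple loop (`sum_first_n` and `MAX_ITER` are illustrative names, not from the slides):

```python
MAX_ITER = 10  # assumed maximum iteration count of the loop under test

def sum_first_n(n):
    # Simple loop under test: sums the integers 1..n
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

# The fundamental criteria: 0 passes (bypass), 1 pass, 2 passes,
# and a typical number of passes:
loop_counts = [0, 1, 2, MAX_ITER // 2]
assert [sum_first_n(n) for n in loop_counts] == [0, 1, 3, 15]
```

For bounded loops, testing the maximum, maximum-minus-one, and maximum-plus-one iteration counts is a common extension of the same idea.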
- 48. Loop Testing (contd.)
  - Nested loops:
    - 1) Set all but one loop to a typical value and run the single-loop cases for that loop. Repeat for all loops.
    - 2) Do minimum values for all loops simultaneously.
    - 3) Set all loops but one to the minimum value and repeat the test cases for that loop. Repeat for all loops.
    - 4) Do maximum looping values for all loops simultaneously.
- 49. Data Flow Testing
  - Select test paths of a program based on the definition-use (DU) chains of variables in the program.
  - Write test cases to cover every DU chain at least once.
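A small illustration of DU chains (the `transform` function and its variables are ours, for illustration only): `x` is defined once at entry and used in two places, one on each branch, giving two DU chains for `x`; covering each chain requires one test per branch.

```python
def transform(x):
    # x is "defined" at entry (d1). Each branch contains one "use":
    if x > 0:
        y = x * 2   # use u1 of x; definition of y
    else:
        y = -x      # use u2 of x; definition of y
    return y        # use of y

# DU chains for x: (d1, u1) and (d1, u2). One test per chain:
assert transform(3) == 6    # exercises chain (d1, u1)
assert transform(-4) == 4   # exercises chain (d1, u2)
```

Note that a single test cannot cover both chains here, which is exactly why DU-chain coverage forces an extra test beyond plain statement coverage of either branch alone.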
- 50. Thank You
  - Any questions?


by Solanke Patil