Unit 2: Functional Testing Reading: PJR Ch. 5


  1. Unit 2: Functional Testing
     Reading: PJR Ch. 5
     References:
     • Software Testing and Analysis: Process, Principles, and Techniques (Pezzè and Young)
     • Why Programs Fail (Zeller)
     • Software Testing: Principles and Practices (Desikan and Ramesh)
     • How We Test Software at Microsoft (Page, Johnston, and Rollison)
  2. Functional Testing
     • Functional testing uses the specification (formal or informal) to systematically create tests.
     • Timely
       – Can write tests before code is written
     • Effective
       – Finds faults that can elude other approaches
       – Best for catching "missing logic" bugs
     • Widely applicable
       – Any description of program behavior can serve as the spec
       – Any level of granularity, from unit to system testing
  3. Functional Testing Goals
     • Problem: Cannot test all possible input combinations
     • Need to choose tests…
       – Intelligently
       – Systematically
       – Thoroughly
     • Key idea: Partition the input space into equivalence classes.
       – All inputs within an equivalence class exercise the same functionality.
       – Sufficient to run one test in each equivalence class (not really).
       – Can be tricky to divide the input space into equivalence classes.
       – Better to err on the side of too many tests than too few.
  4. Systematic Functional Testing
     [Diagram: the space of possible input values, partitioned into regions labeled "Failure (valuable test case)" and "No failure"]
  5. Functional Testing Approach
     1. Identify independently testable features in the program.
     2. Find equivalence classes for each feature identified in step 1.
     3. Apply boundary value analysis to create one or more test case specifications for each equivalence class.
     4. Select other potentially interesting values based on previous experience and intuition.
     5. Apply combinatorial analysis to create test case specifications that combine the independently testable features.
  6. Independently Testable Features
     • Decompose the subject under test into independently testable features (ITFs).
     • For functions:
       – Each parameter may be an ITF.
       – Closely coupled parameters may be considered as one ITF.
     • For integration testing:
       – Each unit is an ITF.
     • For system testing:
       – ITFs are exposed through user interfaces or APIs.
       – ITFs refer to different "phases" of the program.
  7. Class Problem: Testable Features
     • Consider a multi-function calculator.
     • What are the independently testable features?
  8. Equivalence Classes
     • Need to divide the input space for each ITF into equivalence classes.
     • Based on the desired / specified functionality of the subject under test.
     • Examples:
       – Computer science students need to meet with their advisor if their GPA is less than 2.75 or if they are in their first year at SU (sketched below).
       – Income tax rate is dependent on income.
       – Volume discount if you buy more than 1000 units.
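As a concrete illustration of "one test per equivalence class," here is a minimal sketch for the advisor rule above. It assumes a hypothetical function needs_advisor(gpa, year_at_su) in a module named advising; both names are made up for the example.

```python
import pytest

# Hypothetical function under test (assumed): returns True when the student
# must meet with their advisor, i.e. gpa < 2.75 or year_at_su == 1.
from advising import needs_advisor  # assumed module

# One representative value from each equivalence class of the two inputs.
@pytest.mark.parametrize("gpa, year, expected", [
    (2.00, 3, True),    # GPA below 2.75, not first year
    (3.50, 1, True),    # GPA at or above 2.75, first year
    (2.00, 1, True),    # both conditions hold
    (3.50, 3, False),   # neither condition holds
])
def test_needs_advisor(gpa, year, expected):
    assert needs_advisor(gpa, year) == expected
```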
  9. Class Problem: Next Date
     • The next date program accepts a date as an input and returns the next date.
     • It only works on dates in the years 1700-3000.
     • What are equivalence values for each of the three inputs?
  10. Boundary Value Analysis
     [Diagram: possible test cases marked along the input range]
     • For ordered inputs, history suggests that values near the boundary are most error prone.
     • What are good values for an input that expects integers from -10 to 10?
  11. Boundary Value Analysis
     • For each boundary n, want to have three cases:
       – n – 1, n, n + 1
     • For a range of values from x to y, have at least six cases:
       – x – 1, x, x + 1, y – 1, y, y + 1
     • Some values are outside of the range:
       – May also be part of another equivalence class (worry about redundancy later)
       – May be illegal: should check to make sure it's treated as illegal
       – May be impossible to test
         · Example: drop-down menu
         · Is there another way to enter the value?
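Applying the n – 1, n, n + 1 rule to the "integers from -10 to 10" question on the previous slide gives the following sketch. The function in_range and the module checker are assumed names standing in for whatever actually consumes the input.

```python
import pytest

# Hypothetical function under test (assumed): accepts integers from -10 to 10,
# returning True inside the range and False outside it.
from checker import in_range  # assumed module

# Boundary values for the range -10 .. 10: for each boundary n, test n-1, n, n+1,
# plus a value from the middle of the range.
@pytest.mark.parametrize("value, expected", [
    (-11, False), (-10, True), (-9, True),   # lower boundary
    (0, True),                               # interior value
    (9, True), (10, True), (11, False),      # upper boundary
])
def test_in_range_boundaries(value, expected):
    assert in_range(value) == expected
```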
  12. Boundaries
     • Boundary analysis can be applied to more situations than handling numeric data. Examples?
  13. Hidden Boundaries
     • What about "unbounded" data types?
       – Integers with no specified range
       – Size of "growing" data structures
     • What about hidden boundaries?
       – The size of the window in a GUI
       – Data type limits
       – Memory limits
       – Operating system limits
  14. Special Values
     • Within an equivalence class, we have identified values near the boundary. However, we don't necessarily want to stop there.
       – Should have at least one test in the middle.
       – Sometimes the boundaries are either trivial or handled differently in code.
     • Using prior experience and intuition, look at other values likely to cause problems:
       – 0 and negative numbers
       – values of a different type (if possible)
       – (n+1)-digit numbers when only n digits are needed
       – other values based on the subject under test, paying close attention to inputs considered invalid
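A small sketch of special-value tests, assuming a hypothetical parse_quantity(text) in a module orders that accepts a positive quantity of at most 4 digits and raises ValueError on anything else; the function, module, and 4-digit limit are all assumptions made for illustration.

```python
import pytest

# Hypothetical function under test (assumed): parses a quantity entered as a
# string of at most 4 digits; the spec is assumed to require a positive value.
from orders import parse_quantity  # assumed module

# Special values beyond the plain boundaries: zero, a negative number,
# values of the wrong type/shape, and an (n+1)-digit number when n digits fit.
@pytest.mark.parametrize("text", ["0", "-5", "12.5", "abc", "12345"])
def test_parse_quantity_rejects_special_values(text):
    with pytest.raises(ValueError):
        parse_quantity(text)
```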
  15. Class Problem: Next Date
     [Worksheet table with columns Month, Day, Year to be filled in during class]
  16. Catalog-Based Testing
     • Catalogs capture the experience of test designers by listing important cases for each possible type of variable.
     • A catalog lists kinds of elements that can occur in a specification.
     • Each catalog entry is associated with a list of generic test case specifications.
     • Example:
       – catalog entry: Boolean
       – two test case specifications: true, false
       – label if applicable only to: input, output, or both
  17. A simple catalog (part I)
     • Boolean
       – True                                    in/out
       – False                                   in/out
     • Enumeration
       – Each enumerated value                   in/out
       – Some value outside the enumerated set   in
     • Integer
       – Large negative number                   in/out
       – Small negative number                   in/out
       – 0                                       in/out
       – Small positive number                   in/out
       – Large positive number                   in/out
  18. A simple catalog (part II)
     • Range L ... U
       – L – 1                                   in
       – L                                       in/out
       – A value between L and U                 in/out
       – U                                       in/out
       – U + 1                                   in
     • Numeric Constant C
       – C                                       in/out
       – C – 1                                   in
       – C + 1                                   in
       – Any other constant compatible with C    in
     • Non-Numeric Constant C
       – C                                       in/out
       – Any other constant compatible with C    in
       – Some other compatible value             in
  19. A simple catalog (part III)
     • Sequence (such as a sorted linked list)
       – Empty                                        in/out
       – A single element                             in/out
       – More than one element                        in/out
       – Maximum length (if bounded) or very long     in/out
       – Longer than maximum length (if bounded)      in
       – Incorrectly terminated                       in
     • Traversal for item P in list
       – P occurs at beginning of list                in
       – P occurs in interior of list                 in
       – P occurs at end of list                      in
       – P does not occur in list                     in
     • Insertion of item P in list
       – P inserted at beginning of list              out
       – P inserted in interior of list               out
       – P inserted at end of list                    out
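One way to put such a catalog to work is to keep it as plain data and derive generic test values from it. The sketch below paraphrases the Boolean, Integer, and Range entries above; the helper name range_entry is made up for illustration.

```python
# A small, hand-written catalog: each entry maps a kind of element to the
# generic test values listed in the slides (Boolean and Integer entries shown).
CATALOG = {
    "boolean": [True, False],
    "integer": [-10**9, -1, 0, 1, 10**9],   # large/small negative, 0, small/large positive
}

def range_entry(low, high):
    """Test values for a 'Range L ... U' entry: L-1, L, an interior value, U, U+1."""
    return [low - 1, low, (low + high) // 2, high, high + 1]

# Example use: candidate day-of-month values for a 1..31 input.
if __name__ == "__main__":
    print(range_entry(1, 31))   # [0, 1, 16, 31, 32]
```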
  20. Combinatorial Testing
     • Programs / units generally have more than one input.
     • Test cases need to take care of combinations of different inputs.
       – A program may fail on a particular combination of inputs.
     • Combinatorial testing: systematically generate combinations to be tested.
       – Uses the representative values for each ITF.
  21. Example: Next Date
     • How many different combinations are there?
       – Month:
       – Day:
       – Year:
       – Total:
     • How could you legitimately reduce the number of combinations?
  22. Introducing Constraints
     • Generally each invalid input case needs to be executed once (with all other inputs valid).
       – Overkill to test things like April 0, 3005.
     • Some combinations of valid input values are invalid.
       – Example: February 31
       – Suffices to test each of these once.
     • Look for uninteresting cases – cases that can be grouped together.
       – Example: The year does not matter except in Feb. & Dec.
     • Common constraints:
       – Error or invalid conditions
       – Input x is ignored when input y is 0
       – "Empty" data structure
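A minimal sketch of generating the full cross product of representative values and then applying a validity constraint to drop impossible dates such as February 31. The particular representative values below are assumptions chosen for illustration, not the classes worked out in lecture.

```python
from itertools import product

# Assumed representative values for the Next Date inputs (one per illustrative class).
MONTHS = [2, 4, 12]          # February, a 30-day month, December
DAYS   = [1, 28, 30, 31]
YEARS  = [1900, 2000, 2024]  # century non-leap, century leap, ordinary year

DAYS_IN_MONTH = {2: 28, 4: 30, 12: 31}   # leap years ignored for brevity

def is_valid(month, day, year):
    """Constraint: drop combinations that are not real dates (e.g. February 31)."""
    return 1700 <= year <= 3000 and 1 <= day <= DAYS_IN_MONTH[month]

# Full cross product, then constraints remove the invalid combinations.
combos = [(m, d, y) for m, d, y in product(MONTHS, DAYS, YEARS) if is_valid(m, d, y)]
print(len(list(product(MONTHS, DAYS, YEARS))), "raw combinations")   # 36
print(len(combos), "after applying the validity constraint")
```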
  23. Pairwise Testing
     • Even when intelligent constraints are applied, the number of resulting combinations can be too large.
     • Pairwise testing: generate combinations that efficiently cover all pairs of classes.
       – Rationale: most failures are triggered by single values or combinations of a few values.
       – Efficient: uses as few test cases as possible.
       – Covering pairs reduces the number of test cases but still reveals most faults.
     • N-way combinatorial testing: generate all N-tuples of input combinations.
       – N = 2 for pairwise testing
  24. Combinatorial Testing of Configurations
     • A common use of combinatorial testing is for configuration testing:
       – Operating system
       – Environment variables
       – Other pieces of software
     • Example:
       – Browser could be "IE" or "Firefox"
       – Operating system could be "Vista", "XP", or "Windows 7"
     • Systematically generate combinations to be tested:
       – IE on Vista, IE on XP, Firefox on Vista, Firefox on XP, ...
  25. Example: Pairwise Testing
     • How many tests are needed such that each pair of input values is tested at least once?
       – OS: XP, Vista, Windows 7
       – Browser: IE, Firefox
       – Version of Library X: 1.0, 1.1, 1.2, 2.0
       – Has program Y installed: Yes, No
  26. Example: One possible solution

     Library X   OS          Browser   Program Y?
     1.0         XP          IE        Y
     1.0         Vista       Firefox   Y
     1.0         Windows 7   Firefox   N
     1.1         XP          Firefox   Y
     1.1         Vista       Firefox   N
     1.1         Windows 7   IE        N
     1.2         XP          IE        N
     1.2         Vista       IE        Y
     1.2         Windows 7   Firefox   Y
     2.0         XP          Firefox   N
     2.0         Vista       IE        N
     2.0         Windows 7   IE        Y
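A quick way to convince yourself that the twelve rows above really cover every pair is to check coverage mechanically. The sketch below hard-codes the slide's suite and compares the pairs it covers against all required pairs.

```python
from itertools import combinations, product

PARAMS = {
    "library": ["1.0", "1.1", "1.2", "2.0"],
    "os": ["XP", "Vista", "Windows 7"],
    "browser": ["IE", "Firefox"],
    "program_y": ["Y", "N"],
}

# The 12-row suite from the slide, in (library, os, browser, program_y) order.
SUITE = [
    ("1.0", "XP", "IE", "Y"), ("1.0", "Vista", "Firefox", "Y"), ("1.0", "Windows 7", "Firefox", "N"),
    ("1.1", "XP", "Firefox", "Y"), ("1.1", "Vista", "Firefox", "N"), ("1.1", "Windows 7", "IE", "N"),
    ("1.2", "XP", "IE", "N"), ("1.2", "Vista", "IE", "Y"), ("1.2", "Windows 7", "Firefox", "Y"),
    ("2.0", "XP", "Firefox", "N"), ("2.0", "Vista", "IE", "N"), ("2.0", "Windows 7", "IE", "Y"),
]

values = list(PARAMS.values())
# Every pair of values from two different parameters that must appear in some row.
required = {((i, a), (j, b))
            for i, j in combinations(range(len(values)), 2)
            for a, b in product(values[i], values[j])}
# Pairs actually covered by the suite.
covered = {((i, row[i]), (j, row[j]))
           for row in SUITE
           for i, j in combinations(range(len(row)), 2)}
print("all pairs covered:", required <= covered)   # True
print("tests:", len(SUITE))                        # 12 = 4 x 3, the minimum possible here
```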
  27. Pairwise Testing
     • It is not always the case that the number of tests needed is the product of the number of cases of the two inputs with the most cases.
     • In general, generating a list of combinations is hard to do manually:
       – Too many combinations
       – Too many constraints to contend with
     • The process can be automated using tools (a greedy sketch follows below).
     • Tradeoff: manually specifying constraints vs. more tests
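To give a feel for what such tools do, here is a minimal greedy sketch (in the general spirit of greedy pairwise generators, not any specific tool's algorithm): it repeatedly picks the candidate row that covers the most not-yet-covered pairs. It reuses the configuration example from the earlier slides.

```python
from itertools import combinations, product

def pair_set(row):
    """All (parameter-index, value) pairs covered by one test row."""
    return {((i, row[i]), (j, row[j])) for i, j in combinations(range(len(row)), 2)}

def greedy_pairwise(params):
    """Greedily pick rows from the full cross product until every pair of values
    from different parameters is covered; small, but not guaranteed minimal."""
    names = list(params)
    values = list(params.values())
    uncovered = {((i, a), (j, b))
                 for i, j in combinations(range(len(values)), 2)
                 for a, b in product(values[i], values[j])}
    candidates = list(product(*values))
    tests = []
    while uncovered:
        best = max(candidates, key=lambda row: len(pair_set(row) & uncovered))
        uncovered -= pair_set(best)
        tests.append(dict(zip(names, best)))
    return tests

if __name__ == "__main__":
    suite = greedy_pairwise({
        "os": ["XP", "Vista", "Windows 7"],
        "browser": ["IE", "Firefox"],
        "library": ["1.0", "1.1", "1.2", "2.0"],
        "program_y": ["Yes", "No"],
    })
    for test in suite:
        print(test)
    print(len(suite), "tests")   # far fewer than the 48 combinations in the full product
```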
