Leaping over the Boundaries of Boundary Value Analysis



Many books, articles, classes, and conference presentations tout equivalence class partitioning and boundary value analysis as core testing techniques. Yet many discussions of these techniques are shallow and oversimplified. Testers learn to identify classes based on little more than hopes, rumors, and unwarranted assumptions, while the "analysis" consists of little more than adding or subtracting one to a given number. Do you want to limit yourself to checking the product's behavior at boundaries? Or would you rather test the product to discover that the boundaries aren't where you thought they were, and that the equivalence classes aren't as equivalent as you've been told? Join Michael Bolton as he jumps over the partitions and leaps across the boundaries to reveal a topic far richer than you might have anticipated and far more complex than the simplifications that appear in traditional testing literature and folklore.



  1. T2 Test Techniques, 5/8/2014, 9:45:00 AM
     Leaping over the Boundaries of Boundary Value Analysis
     Presented by: Michael Bolton, DevelopSense
     Brought to you by: 340 Corporate Way, Suite 300, Orange Park, FL 32073
     888-268-8770 ∙ 904-278-0524 ∙ sqeinfo@sqe.com ∙ www.sqe.com
  2. Michael Bolton, DevelopSense
     Tester, consultant, and trainer Michael Bolton is the co-author (with James Bach) of Rapid Software Testing, a course that presents a methodology and mindset for testing software expertly in uncertain conditions and under extreme time pressure. Michael is a leader in the context-driven software testing movement, with twenty years of experience testing, developing, managing, and writing about software. Currently, he leads DevelopSense, a Toronto-based consultancy. Prior to DevelopSense, he was with Quarterdeck Corporation, where he managed the company’s flagship products and directed project and testing teams, both in-house and worldwide. Contact Michael at michael@developsense.com.
  3. Boundaries Abound, Boundlessly: “If it ain’t exploratory, it’s avoidatory”
     James Bach, james@satisfice.com, http://www.satisfice.com
     Michael Bolton, mb@developsense.com, http://www.developsense.com
     This material benefited from review and conversation at the 4th Workshop on Heuristic and Exploratory Techniques. The following people contributed to that conference: Jon Bach, James Bach, Robert Sabourin, Karen Johnson, Cem Kaner, Henrik Andersson, Keith Stobie, Scott Barber, David Gilbert, Doug Hoffman, Mike Kelly, Harry Robinson, Ross Collard, Dawn Haynes, Timothy Coulter, and Michael Bolton.
     4.3.2 Boundary value analysis (K3): “Behavior at the edge of each equivalence partition is more likely to be incorrect, so boundaries are an area where testing is likely to yield defects. The maximum and minimum values of a partition are its boundary values. When designing test cases, a value on each boundary is chosen.” (ISTQB Foundation Syllabus)
  4. Classical Boundary Testing Is Based on Specific Theories of Error
     • Comparison operators are easy to screw up.
     • Array indices are easy to screw up; Visual Basic’s OPTION BASE directive made the screw-ups even easier.
     • Buffer sizes are easy to screw up, and buffer overflows happen in the real world.
     • Limit checks are easy to forget.
     • Signed and unsigned data types are easily confused.
     • Error checking and recovery are often poorly thought out.
     These are simple, plausible errors. They also have the virtue of being easy to teach.
     A Message from the Programmer
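The comparison-operator and limit-check errors listed above are easy to show concretely. Here is a minimal, hypothetical sketch (the `MAX_LEN` limit and the validator names are invented for illustration): an off-by-one in which `>` was used where `<` was needed, so the boundary value itself slips through.

```python
MAX_LEN = 32  # hypothetical limit: the buffer also needs room for a terminator

def validate_buggy(s: str) -> bool:
    # Off-by-one: '>' still accepts a string of exactly MAX_LEN characters,
    # leaving no room for the terminator byte.
    return not (len(s) > MAX_LEN)

def validate_fixed(s: str) -> bool:
    # '<' rejects the boundary value, reserving one byte for the terminator.
    return len(s) < MAX_LEN

boundary = "x" * MAX_LEN
print(validate_buggy(boundary))  # True: the boundary value slips through
print(validate_fixed(boundary))  # False: the boundary value is rejected
```

Away from the boundary the two validators agree, which is exactly why only a test at the boundary distinguishes them.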
  5. A System Test
     (diagram: a GUI, a database, several APIs, and serializers, one serializer limited to <256 bytes)
     What Have We Discovered?
     • Confusion about data types and limits
     • Misdeclared boundaries
     • Upstream boundaries
     • Dynamic boundaries, for which limits change
     • Latent (previously unnoticed) boundaries
     • Boundaries that are revealed by specific actions
     • Parafunctional boundaries
     • Conditional boundaries
     • Interaction boundaries
     • Timing boundaries
  6. Why Procedures Can’t Suffice: Input Constraint Testing
     Here’s an older version of The Famous Triangle Program. We see interesting differences in how testers approach this task.
     How Well Do Text Fields Handle Long Inputs?
     • What does “long” mean?
     • What does “handle well” mean?
     • What will users do? What will they expect?
     • So what?
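The triangle exercise (a classic often attributed to Glenford Myers) can be sketched in a few lines; the version below is a plausible reconstruction for illustration, not the actual program used in the class. Every decision point in it is a boundary: the triangle inequality sits exactly one unit away from a degenerate “triangle” where a + b == c.

```python
def triangle_type(a: int, b: int, c: int) -> str:
    """Classify a triangle from three side lengths."""
    if min(a, b, c) <= 0:
        return "not a triangle"          # boundary: zero/negative sides
    if a + b <= c or b + c <= a or a + c <= b:
        return "not a triangle"          # boundary: degenerate, a + b == c
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

print(triangle_type(1, 2, 3))  # not a triangle (exactly on the boundary)
print(triangle_type(1, 2, 4))  # not a triangle (past the boundary)
print(triangle_type(2, 2, 3))  # isosceles (just inside the boundary)
```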
  7. Max String Sizes Chosen by 39 Testers (no limit was specified)
     (chart: maximum string length per tester, 1 to 100,000 on a logarithmic scale, Cycle 1 vs. Cycle 2; for cycle two we specifically asked students to try long inputs)
     Interesting Lengths
     • 16 digits & up: loss of mathematical precision.
     • 23 digits & up: can’t see all of the input.
     • 310 digits & up: input not understood as a number.
     • 1,000 digits & up: exponentially increasing freeze when navigating to the end of the field by pressing <END>.
     • 23,829 digits & up: all text in field turns white.
     • 2,400,000 digits: crash (reproducible).
     Only the first two boundaries were known to the programmer!
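The first boundary in the list, loss of mathematical precision at 16 digits, follows from IEEE-754 double precision carrying roughly 15–16 significant decimal digits: 2^53 is the last point at which every whole number is still exactly representable. A quick probe in Python:

```python
# 2**53 = 9007199254740992 is the largest integer below which doubles
# represent every whole number exactly; one past it, precision is lost.
ok  = 9007199254740992   # 2**53
bad = 9007199254740993   # 2**53 + 1

print(float(ok) == ok)    # True: survives the round trip through float
print(float(bad) == bad)  # False: rounds back down to 2**53
```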
  8. (screenshots: behavior at 16 digits and up; at 23 digits and up)
  9. And More!
     • 16 digits & up: loss of mathematical precision.
     • 23 digits & up: can’t see all of the input.
     • 310 digits & up: input not understood as a number.
     • 1,000 digits & up: exponentially increasing freeze when navigating to the end of the field by pressing <END>.
     • 23,829 digits & up: all text in field turns white.
     • 2,400,000 digits: crash (reproducible).
     Only the first two boundaries were known to the programmer!
     What Stops Testers from Trying Longer Inputs?
     • They’re seduced by entering data up to the visible limits of the field.
     • They think they need a spec that tells them the max.
     • If they have a spec or script, they stop when the spec or script says stop.
     • If they have access to a programmer, they accept the programmer’s model.
     • They’re satisfied by the first boundary bug they find (16 digits).
     • They let their fingers do the walking instead of using a program like Notepad or PerlClip to generate input.
     • They use a strictly linear lengthening strategy.
     • They don’t realize the significance of degradation.
     • They assume more thorough testing will be too hard and take too long.
     • They think “no one would do that” (hackers do it).
     • The literature persistently tells them that one extra byte is enough.
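PerlClip, mentioned above, generates “counterstrings”: strings that report their own length, so that wherever a field truncates the input, the last visible marker tells you the position at a glance. A rough sketch of the idea (my reimplementation, not PerlClip’s actual code):

```python
def counterstring(length: int, marker: str = "*") -> str:
    """Build a string such as '2*4*6*8*11*...': the digits before each
    marker give that marker's 1-based position in the string."""
    out = []
    pos = 0
    while pos < length:
        # Find the position m the next marker will occupy once its own
        # digits are written: m = pos + len(str(m)) + 1 (a fixed point).
        m = pos + 2
        while m != pos + len(str(m)) + 1:
            m = pos + len(str(m)) + 1
        out.append(str(m) + marker)
        pos = m
    return "".join(out)[:length]  # trim any overshoot in the last chunk

print(counterstring(30))  # 2*4*6*8*11*14*17*20*23*26*29*
```

Paste the result into a field; if only “…23*” is visible, the field shows 23 characters, with no counting required.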
  10. The Boundary Risk Hypothesis
      All other things being equal, any given thing is more likely to be misclassified, mishandled, or misdirected when “near” a suspected boundary than when “far away.” This hypothesis arises because we are aware of specific mechanisms of boundary-related failure, over and above other mechanisms. But the actual boundaries in a product may not be the ones we are told about. That’s why we must explore.
      Toward a Better Definition of Boundary Testing
      • Boundary testing: any testing to the extent that it involves the evaluation or discovery of boundaries or boundary-related behavior.
      • Boundary: 1. a dividing point between two otherwise contiguous regions of behavior; or 2. a principle, mechanism, or event by which things are classified into different sets that may be confused with other sets.
  11. Overcoming Objections
      • Do we always have to do all this stuff?
      • Do we have to do this stuff on every build?
      • But we have too many tests already!
      • With so many boundaries, how will we have time for everything?
      Possible Answers
      • Use automation in an exploratory way, as well as in a confirmatory way.
      • Use tools: PerlClip, scripting languages, or Notepad files.
      • Reduce prescriptive documentation.
      • Resist the urge to document every test.
      • Resist the urge to repeat every test on every build.
      • Reduce the load of prescribed tests.
      • Take concise notes, and identify the value of logging.
      • Note or record tests that reveal problems.
      • Record detail to the extent that other people WILL (not might) need the guidance.
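“Automation in an exploratory way” can be as simple as a loop that grows the input geometrically and logs wherever the observed behavior changes, instead of asserting one scripted expectation. A hypothetical sketch, with `handle_input` standing in for whatever hook into the product you have (here it fakes an undocumented 100-character limit):

```python
def handle_input(s: str) -> str:
    # Stand-in for the system under test: pretend 100 characters
    # is an undocumented limit that nobody told the tester about.
    return "ok" if len(s) <= 100 else "error"

def probe_boundaries(sut, max_len: int = 10_000):
    """Double the input length each step; record every length at which
    the observed behavior differs from the previous observation."""
    findings, last, n = [], None, 1
    while n <= max_len:
        result = sut("x" * n)
        if result != last:
            findings.append((n, result))  # behavior changed near here
            last = result
        n *= 2
    return findings

print(probe_boundaries(handle_input))  # [(1, 'ok'), (128, 'error')]
```

Each pair of adjacent findings brackets a suspected boundary (here, somewhere between 64 and 128), which a follow-up binary search or a hands-on exploratory session can then pin down.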