
Presentation Transcript

  • Black Box Software Testing 2004 (Fall), PART 17 -- EXPLORATORY TESTING, by Cem Kaner, J.D., Ph.D., Professor of Software Engineering, Florida Institute of Technology, and James Bach, Principal, Satisfice Inc.
    • Copyright (c) Cem Kaner & James Bach, 2000-2004. This work is licensed under the Creative Commons Attribution-ShareAlike License. To view a copy of this license, visit http://creativecommons.org/licenses/by-sa/2.0/ or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.
    • These notes are partially based on research that was supported by NSF Grant EIA-0113539 ITR/SY+PE: "Improving the Education of Software Testers." Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
  • What is Testing? A technical investigation done to expose quality-related information about the product under test
  • Information objectives
    • Find defects
    • Maximize bug count
    • Block premature product releases
    • Help managers make ship / no-ship decisions
    • Minimize technical support costs
    • Assess conformance to specification
    • Conform to regulations
    • Minimize safety-related lawsuit risk
    • Find safe scenarios for use of the product
    • Assess quality
    • Verify correctness of the product
    • Assure quality
    Different testing strategies serve different information objectives.
  • Exploratory testing is useful for discovering new information
  • Exploratory Testing
    • Simultaneously:
      • Learn about the product
      • Learn about the market
      • Learn about the ways the product could fail
      • Learn about the weaknesses of the product
      • Learn about how to test the product
      • Test the product
      • Report the problems
      • Advocate for repairs
      • Develop new tests based on what you have learned so far.
  • Exploratory Testing
    • Exploratory testing is not a testing technique. It's a way of thinking about testing.
    Any testing technique can be used in an exploratory way.
  • Exploratory Testing Tasks are the Usual Testing Tasks
    [Diagram: a cycle linking Learning, Design Tests, and Execute Tests; its outputs are testing notes, tests, and problems found.]
    • Learning -- Product (coverage), Techniques, Quality (oracles): discover the elements of the product, discover how the product should work, discover test design techniques that can be used.
    • Design Tests: decide which elements to test; speculate about possible quality problems; select & apply test design techniques.
    • Execute Tests: configure & operate the product; observe product behavior; evaluate behavior against expectations.
  • Exploratory Testing
    • Every competent tester does some exploratory testing.
    • For example -- bug regression:
    • Report a bug, the programmer claims she fixed the bug, so you test the fix.
      • Start by reproducing the steps you used in the bug report to expose the failure.
      • Then vary your testing to search for side effects.
        • These variations are not predesigned. This is an example of chartered exploratory testing.
    • (In a chartered exploratory testing session, you start the session by selecting a "charter" -- a focus or objective that will guide your work. The session doesn't have to stick to the charter--you follow up interesting leads as you come to them--but the charter helps you set your strategy, and whenever you ask, "what am I going to do next?" the place you look is your charter.)
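The bug-regression pattern described above (reproduce the reported steps, then vary to search for side effects) can be sketched in code. Everything in this sketch is invented for illustration: a hypothetical `truncate()` helper stands in for the fixed code, and the empty-string case stands in for the originally reported failure.

```python
# Sketch of regression-plus-variation testing. The truncate() helper and
# its history (an empty-input crash, now fixed) are hypothetical.

def truncate(text, limit=10):
    """Hypothetical fixed function: shorten text to at most `limit` chars."""
    if not isinstance(text, str):
        raise TypeError("text must be a string")
    return text if len(text) <= limit else text[:limit - 3] + "..."

# Step 1: reproduce the exact steps from the bug report.
assert truncate("") == ""   # the originally reported failure case

# Step 2: vary the test, unscripted, to search for side effects of the fix.
variations = ["a", "exactly10!!", "x" * 100, "unicode: é"]
results = {v: truncate(v) for v in variations}
for v, out in results.items():
    assert len(out) <= 10, f"fix broke the length contract for {v!r}"
```

Any variation that behaves oddly becomes the seed of the next variation; the charter keeps that feedback loop pointed at the fix rather than wandering at random.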
  • Exploratory Testing Continuum
    [Diagram: a continuum from pure scripted, through vague scripts, fragmentary test cases, charters, and roles, to freestyle exploratory.]
    • Plenty of work is done between the extremes of the continuum.
    • When I say “exploratory testing” and don’t qualify it, I mean anything on the exploratory side of this continuum.
  • So how do we do the exploration?
  • The Exploratory Approach is Heuristic
    • A heuristic is a fallible method for:
      • attempting to solve a problem, or
      • attempting to reach a decision.
    • Both meanings are common in computing.
    • Heuristics can come into play consciously or unconsciously.
      • A heuristic may be used as a conscious tool, either to address a problem for which no deterministic solution is available, or to address the problem or reach a decision more quickly or cheaply than the available deterministic solution would allow.
      • Unconscious heuristics are often called biases. They change the probability of a perception or decision or behavior in response to a stimulus or situation.
    • It can be impossible for an observer to tell whether a heuristic is operating consciously or unconsciously. (Example: racial profiling.)
    • Whether conscious or unconscious, heuristics are fallible. They do not guarantee a correct solution, they may contradict other heuristics, and their acceptance or application depends on the immediate context, not on general rules.
  • Heuristics
    • Definition of the engineering method: "the strategy for causing the best change in a poorly understood or uncertain situation within the available resources." (p. 5)
    • "A heuristic is anything that provides a plausible aid or direction in the solution of a problem but is in the final analysis unjustified, incapable of justification, and fallible. It is used to guide, to discover and to reveal. This statement is a first approximation, or as the engineer would say, a first cut, at defining a heuristic.
    • "Although difficult to define, a heuristic has four signatures that make it easy to recognize:
      • A heuristic does not guarantee a solution;
      • It may contradict other heuristics;
      • It reduces the search time in solving a problem; and
      • Its acceptance depends on the immediate context instead of on an absolute standard." (pp. 16-17)
        • Billy Vaughn Koen, Definition of the Engineering Method, ASEE, 1985.
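Koen's four signatures can be made concrete with a small test-selection heuristic. The boundary-value picker and the `accepts_age()` stand-in below are invented examples, not part of the course materials.

```python
# A boundary-value test selection heuristic, annotated with Koen's
# signatures. accepts_age() is a hypothetical function under test.

def boundary_values(lo, hi):
    """Heuristic: pick inputs at and around the edges of a valid range.
    - Does not guarantee a solution: a bug at mid-range goes undetected.
    - Reduces search time: 6 tests instead of exhaustive enumeration.
    - May contradict other heuristics (e.g. "test common use cases first").
    - Context-dependent: useless when the input domain is not an ordered range.
    """
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def accepts_age(age):          # hypothetical product code, valid range 0-120
    return 0 <= age <= 120

picks = boundary_values(0, 120)            # [-1, 0, 1, 119, 120, 121]
verdicts = {age: accepts_age(age) for age in picks}
```

Six inputs instead of hundreds is the payoff; the missed mid-range bug is the price, which is exactly what "fallible" means here.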
  • All test techniques are heuristics
  • Oracles are heuristics
    • Our rules for classing behaviors as bugs are heuristics (the consistency heuristics).
    • Bach's test strategy model is a heuristic.
    • We use heuristics to cope with the challenge of the impossibility of complete testing.
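One of the consistency heuristics ("consistent with a comparable algorithm") can be sketched as a code oracle. Both functions below are invented stand-ins: a fast sort under test is compared against a slow, trusted reference.

```python
# A comparable-algorithm oracle. It is itself fallible: if the product and
# the reference share the same misunderstanding, bad behavior passes; if
# the reference is wrong, good behavior gets flagged.

import random

def product_sort(xs):                  # stand-in for the code under test
    return sorted(xs)

def reference_sort(xs):                # slow but trusted comparison point
    out = list(xs)
    for i in range(len(out)):
        for j in range(len(out) - 1 - i):
            if out[j] > out[j + 1]:
                out[j], out[j + 1] = out[j + 1], out[j]
    return out

random.seed(0)
failures = []
for _ in range(100):
    data = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
    if product_sort(data) != reference_sort(data):
        failures.append(data)          # a disagreement worth investigating
```

A disagreement is not automatically a bug in the product; it is a flag that one of the two implementations deserves a closer look.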
  • Key challenges of exploratory testing: Software testing poses several core challenges to the skilled practitioner / manager. The same challenges apply to exploratory testers.
  • Key challenges of exploratory testing
    • Learning (How do we get to know the program?)
    • Visibility (How to see below the surface?)
    • Control (How to set internal data values?)
    • Risk / selection (Which are the best tests to run?)
    • Execution (What’s the most efficient way to run the tests?)
    • Logistics (What environment is needed to support test execution?)
    • The oracle problem (How do we tell if a test result is correct?)
    • Reporting (How can we replicate a failure and report it effectively?)
    • Documentation (What test documentation do we need?)
    • Measurement (What metrics are appropriate?)
    • Stopping (How to decide when to stop testing?)
    • Training and Supervision (How to help testers become effective, and how to tell whether they are?)
  • Exploration and Learning
    • Explorers use a variety of tactics to learn about the product and its context (market, environment, etc.)
    • There are enormous individual and contextual differences
    • Here are a few examples.
      • Early coverage heuristics
      • Active reading of reference materials
      • Attacks
      • Failure mode lists
      • Project risk lists
      • Scenario development
      • Models
    • I’ve provided slides for each of these but won’t attempt to cover all of them in class. The ones we skip are for your later reference.
  • Exploration & Learning: Early coverage heuristics
    • We can use tests as a structure for learning about the basics of the program:
      • Test capability before reliability ( Start with function testing)
      • Test single features and single variables before combinations ( Domain testing)
      • Test visible / accessible features before hidden or esoteric ones ( Develop an appreciation of the designers’ view of the mainstream)
      • Test common use cases before unusual ones ( Develop an appreciation of how the product serves the target users’ basic needs.)
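The "single variables before combinations" heuristic above can be sketched as two rounds of tests. The `validate()` function and its 1-100 ranges are invented for illustration.

```python
# Round 1 tests each variable alone; round 2 crosses the boundary picks.
# validate() is a hypothetical product function with two bounded inputs.

from itertools import product

def validate(width, height):           # hypothetical product code
    return 1 <= width <= 100 and 1 <= height <= 100

widths  = [0, 1, 100, 101]             # boundary picks for each variable
heights = [0, 1, 100, 101]

# Round 1: one variable at a time, the other held at a safe default.
single_results = ([validate(w, 50) for w in widths] +
                  [validate(50, h) for h in heights])

# Round 2: only after the singles behave, try combinations of boundaries.
combo_results = {(w, h): validate(w, h) for w, h in product(widths, heights)}
```

If a single-variable test already fails, the combination round would only produce noisy duplicates of the same underlying bug, which is why the ordering matters.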
  • Active Reading of Reference Materials
    • Active readers commonly
      • approach what they are reading with specific questions
      • summarize and reorganize what they read
      • describe what they’ve read to others
      • use a variety of other tactics to integrate what they are reading with what they already know or want to learn
    • You can gather information from explicit specifications and implicit ones (see the discussion of reference documents in our section on specification-based testing).
    • There are plenty of discussions / tutorials on how to do active reading on the net (just search Google for “active reading”).
    • The classic reference on active reading is Adler's "How to Read a Book"
    • We talk about using Bach's Test Strategy Model to structure active reading of specifications in the section on specification-based testing.
  • Active Reading
    • Exploratory testers, on approaching a product, often look for the following types of information:
      • Who are the stakeholders
      • What the designers and programmers intend
      • What customers and users expect
      • Interoperability requirements and other third-party technical expectations
      • Ways the product can be used
      • What benefits it provides including unintended benefits that it can be used to provide
      • What functions the product has
      • What the variables are (what can change, and how)
      • How functions or variables are related to each other or influence each other
      • How to navigate the product (where things are and how to get to them)
      • How products like this have failed in the past, or how this product is likely to fail
      • What oracles are available or can be readily constructed
      • How the product has evolved over time
      • What the testers' clients (e.g. the development team) want from you
      • What the testing staff are good at; how they have excelled in the past.
      • What tools or data are available that might help you test.
  • Active reading
    • As with the information we collect while running early tests, we use these documents to generate test ideas and testing notes (artifacts)
    • Our notes might include lists or descriptions of functions, variables, potential error conditions (and anything else we can list), notes on how things can interact with or constrain each other, possible user scenarios, traceability matrices, or anything else that might guide testing.
    • So how is this different from traditional test planning?
      • We create and execute tests as we read, rather than creating notes and trying the tests later. The tests help us understand the documents (understand-by-example) along with exposing bugs in the product.
      • The intent of the research is development of personal insight, not development of artifacts. The artifacts support the individual explorer, but may or may not be of any use to the next reader (if they are shared at all)
  • Contrasting Approaches In traditional scripted testing, the test planner designs and records the tests at one time. The tester executes the tests at some later time. In exploratory testing, the test planner is the tester. S/he designs and executes tests at the same time. The tester may or may not record the test.
  • Contrasting Approaches Traditional scripted testing emphasizes accountability and repeatability. Exploratory testing emphasizes adaptability and learning.
  • Ambiguity analysis
    • Ambiguities, confusion, and errors in the company-written documents, and in other source documents the company has relied on, point to opportunities for software error (and thus for tests). Whenever there is ambiguity, the author, the reader, the programmer and the tester might all have different opinions about the meaning, and thus about the intended or correct behavior of the product.
    • There are many sources of ambiguity in software design and development.
      • In wording or interpretation of specifications or standards
      • In expected response of the program to invalid or unusual input
      • In behavior of undocumented features
      • In conduct and standards of regulators / auditors
      • In customers’ interpretation of their needs and the needs of the users they represent
      • In definitions of compatibility among 3rd party products
        • Richard Bender teaches this well. If you can’t take his course, you can find notes based on his work in Rodney Wilson’s Software RX: Secrets of Engineering Quality Software
        • An interesting workbook: Cecile Spector, Saying One Thing, Meaning Another
  • Attacks
    • An attack is a stereotyped class of tests, optimized around a specific type of error.
    • We studied Whittaker & Jorgenson's attacks, and a few additional ones, in the section on Risk-Based Testing. Elisabeth Hendrickson, in her course on Creative Software Testing, provides another superb set of attacks and testing heuristics.
    • Of course, attacks are very useful for finding bugs. This is why we accept them as attacks.
    • In addition, attacks are tester-education tools. Seeing which common mistakes a group of programmers do or don't make gives the tester insight into the level of experience, care and skill with which the program was developed.
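An attack in this sense can be written down as a reusable probe. The sketch below is invented for illustration: a hypothetical `Profile.set_name()` entry point is fed the stereotyped hostile strings of a "hostile text input" attack.

```python
# One stereotyped attack: feed every text field oversized, empty, and
# hostile-shaped strings. The Profile class and its 64-character limit
# are hypothetical product code.

class Profile:
    def set_name(self, name):
        if not isinstance(name, str) or len(name) > 64:
            raise ValueError("bad name")
        self.name = name
        return self.name

ATTACK_STRINGS = [
    "A" * 10_000,                      # oversized input
    "",                                # empty input
    "Robert'); DROP TABLE users;--",   # injection-shaped input
    "\x00control\x07chars",            # control characters
]

survived = []
for s in ATTACK_STRINGS:
    p = Profile()
    try:
        p.set_name(s)
        survived.append(s)             # accepted: is that intended behavior?
    except ValueError:
        pass                           # rejected cleanly: fine
```

The `survived` list is the interesting output: each accepted hostile string is a question for the programmer, and the pattern of acceptances hints at how defensively the code was written.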
  • Failure mode lists / Risk catalogs / Bug taxonomies
    • We studied risk catalogs in the section on Risk-Based Testing. Bach's Test Strategy Model is a vehicle for generating and organizing the catalog.
    • In our experience, this is a powerful model for organizing exploratory testing. The more you use it, the more readily you can apply it to new programs and new contexts. The model helps you bring details to mind about
      • what you're testing (and how that might fail)
      • what quality attributes are important in this project (and so, what types of problems to look for most closely, in anything you test)
      • what aspects of the project might facilitate or constrain the testing you can do.
    • Thinking along all three of these lines together yields a lot of interesting test ideas that have the potential to spark interest and can be implemented within the project's resources.
    • Exploratory testing is sometimes described as a search for previously-seen errors under new circumstances. This is only part of the story. If we add previously-imagined or just-imagined errors to the list, we get more of the story, but still only one dimension (the product elements and how they fail). The test strategy model puts these potential failures into context (what failures are most important; what does our project setting make easy or inexpensive to do), and from that context, we can prioritize the potential failures and select among the test techniques that we could use to hunt them.
  • Project Risk Lists
    • The project risk lists point to challenges or high stakes issues that will affect the development of part of the product.
      • At LAWST 7, Microsoft's former Director of Software Testing told us about a study they made of testers who sustained an unusually high rate of bug finding. The common factor they found was that the testers gossiped about the project with many of the programmers and other development staff. They learned about areas that the programmers were having trouble with, about late changes to the code, design thrashes, and so on. Given that knowledge, they targeted their tests toward the risks.
    • When you're doing this kind of testing, old scripts don't help much. These aren't the kinds of risks (see the list in our discussion of Risk-Based Testing) that you typically plan for at the start of the project.
    • Given the insight, you make guesses about how the specific challenge you've become aware of might have caused weaknesses in the product's design or implementation. As you try different types of tests, and review failure patterns in the bug database associated with this section of the code, this author (or whatever the risk factor is), you pull together new ideas for focusing testing.
    • This is much like follow-up testing when you find a few bugs in an area. Once you see a few bugs, you spend more time, look more carefully, and either find more problems or build confidence in a conclusion that there are few or none left to find.
  • Scenario testing
    • We can use any technique in the course of exploratory testing. As we get past the introductory tests, we need strategies for gaining a deeper knowledge of the program and applying that to tests. Scenario testing is a classic method for this.
    • The ideal scenario has several characteristics:
      • The test is based on a story about how the program is used, including information about the motivations of the people involved.
      • The story is motivating. A stakeholder with influence would push to fix a program that failed this test.
      • The story is credible. It not only could happen in the real world; stakeholders would believe that something like it probably will happen.
      • The story involves a complex use of the program or a complex environment or a complex set of data.
      • The test results are easy to evaluate. This is valuable for all tests, but is especially important for scenarios because they are complex.
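A scenario test can carry its story in code. The sketch below is invented: a toy `Account` class stands in for the product, and the story in the docstring illustrates the characteristics listed above.

```python
# A scenario test as code: the story lives in the docstring, the check is
# the final state. The Account API is hypothetical product code.

class Account:
    def __init__(self):
        self.balance = 0
        self.history = []
    def deposit(self, amount):
        self.balance += amount
        self.history.append(("deposit", amount))
    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        self.history.append(("withdraw", amount))

def scenario_rent_day():
    """Story: Maria's paycheck and her rent auto-debit land the same
    morning. Credible (it happens monthly), motivating (a bounced rent
    payment angers stakeholders), moderately complex (two transactions on
    one account), and easy to evaluate (balance and history are checkable).
    """
    acct = Account()
    acct.deposit(2000)        # payroll arrives
    acct.withdraw(1500)       # rent auto-debit
    return acct

acct = scenario_rent_day()
```

Keeping the motivation in the docstring means a failing run already reads like the first paragraph of a persuasive bug report.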
  • Why use scenario tests?
    • Learn the product
    • Connect testing to documented requirements
    • Expose failures to deliver desired benefits
    • Explore expert use of the program
    • Make a bug report more motivating
    • Bring requirements-related issues to the surface, which might involve reopening old requirements discussions (with new data) or surfacing not-yet-identified requirements.
  • Scenarios
    • Designing scenario tests is much like doing a requirements analysis, but is not requirements analysis. They rely on similar information but use it differently.
      • The requirements analyst tries to foster agreement about the system to be built. The tester exploits disagreements to predict problems with the system.
      • The tester doesn’t have to reach conclusions or make recommendations about how the product should work. Her task is to expose credible concerns to the stakeholders.
      • The tester doesn’t have to make the product design tradeoffs. She exposes the consequences of those tradeoffs, especially unanticipated or more serious consequences than expected.
      • The tester doesn’t have to respect prior agreements. (Caution: testers who belabor the wrong issues lose credibility.)
      • The scenario tester’s work need not be exhaustive, just useful.
  • Sixteen ways to create good scenarios
    • Write life histories for objects in the system. How was the object created, what happens to it, how is it used or modified, what does it interact with, when is it destroyed or discarded?
    • List possible users, analyze their interests and objectives.
    • Consider disfavored users: how do they want to abuse your system?
    • List system events. How does the system handle them?
    • List special events. What accommodations does the system make for these?
    • List benefits and create end-to-end tasks to check them.
    • Look at the specific transactions that people try to complete, such as opening a bank account or sending a message. What are all the steps, data items, outputs, displays, etc.?
    • What forms do the users work with? Work with them (read, write, modify, etc.)
    • Interview users about famous challenges and failures of the old system.
    • Work alongside users to see how they work and what they do.
    • Read about what systems like this are supposed to do. Play with competing systems.
    • Study complaints about the predecessor to this system or its competitors.
    • Create a mock business. Treat it as real and process its data.
    • Try converting real-life data from a competing or predecessor application.
    • Look at the output that competing applications can create. How would you create these reports / objects / whatever in your application?
    • Look for sequences: People (or the system) typically do task X in an order. What are the most common orders (sequences) of subtasks in achieving X?
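The first of the sixteen ways (a life history for an object) translates naturally into a single end-to-end test. The `Document` class below is invented for illustration.

```python
# A life-history test: follow one object from creation through use,
# modification, and disposal, including use after disposal. The Document
# class is hypothetical product code.

class Document:
    def __init__(self, title):
        self.title, self.body, self.deleted = title, "", False
    def edit(self, text):
        if self.deleted:
            raise RuntimeError("cannot edit a deleted document")
        self.body += text
    def delete(self):
        self.deleted = True

events = []
doc = Document("notes")                # created
events.append("created")
doc.edit("first draft. ")              # used / modified
doc.edit("second thought.")
events.append("edited")
doc.delete()                           # discarded
events.append("deleted")
try:
    doc.edit("zombie write")           # what interacts with it after death?
except RuntimeError:
    events.append("post-delete edit rejected")
```

The test is the biography: each question in the slide (how created, how used, what it interacts with, when destroyed) becomes one chapter of the sequence.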
  • Models
    • If you can develop a model of the product:
      • you can test the model as you develop it
      • you can draw implications from the model
    • For the explorer, the modeling effort is successful if it leads to interesting tests
      • in a reasonable time frame, or
      • that would be too hard to think of in other ways
    • Examples of models:
      • architecture diagram
      • state-based
      • dataflow
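A state-based model can drive exploration directly. In the sketch below, both the media-player model and the `Player` class are invented; the `Player` here merely mirrors the model, where a real test would call the application instead.

```python
# Minimal state-model exploration: random-walk the model and flag any
# transition where the product disagrees. MODEL and Player are invented;
# Player is a stand-in that a real product adapter would replace.

import random

MODEL = {  # state -> {action: next_state}
    "stopped": {"play": "playing"},
    "playing": {"pause": "paused", "stop": "stopped"},
    "paused":  {"play": "playing", "stop": "stopped"},
}

class Player:
    def __init__(self):
        self.state = "stopped"
    def do(self, action):
        self.state = MODEL[self.state][action]   # stand-in for real product
        return self.state

random.seed(1)
player, state, disagreements = Player(), "stopped", []
for _ in range(200):
    action = random.choice(list(MODEL[state]))   # pick a legal action
    state = MODEL[state][action]                 # what the model predicts
    actual = player.do(action)                   # what the product does
    if actual != state:
        disagreements.append((action, state, actual))
```

This shows both success criteria from the slide: the model is cheap to build, and every disagreement it surfaces is a test you would have been unlikely to think of by staring at the UI.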
  • Example: Exploring Architecture Diagrams
    • Work from a high level design (map) of the system
      • pay primary attention to interfaces between components or groups of components. We’re looking for cracks that things might have slipped through
      • what can we do to screw things up as we trace the flow of data or the progress of a task through the system?
    • You can build the map in an architectural walkthrough
      • Invite several programmers and testers to a meeting. Present the programmers with use cases and have them draw a diagram showing the main components and the communication among them. For a while, the diagram will change significantly with each example. After a few hours, it will stabilize.
      • Take a picture of the diagram, blow it up, laminate it, and you can use dry erase markers to sketch your current focus.
      • Planning of testing from this diagram is often done jointly by several testers who understand different parts of the system.
  • Key challenges of exploratory testing
    • Software testing poses several core challenges to the skilled practitioner / manager. The same challenges apply to exploratory testers:
      • Learning (How do we get to know the program?)
      • Visibility (How to see below the surface?)
        • See the discussion in testability
      • Control (How to set internal data values?)
        • See the discussion in testability
      • Risk / selection (Which are the best tests to run?)
        • See the section on risk-based testing
      • Execution (What’s the most efficient way to run the tests?)
        • See also the discussions of automation and of paired exploratory testing
      • Logistics (What environment is needed to support test execution?)
      • The oracle problem (How do we tell if a test result is correct?)
      • Reporting (How can we replicate a failure and report it effectively?)
      • Documentation (What test documentation do we need?)
      • Measurement (What metrics are appropriate?)
      • Stopping (How to decide when to stop testing?)
      • Training and Supervision (How to help testers become effective, and how to tell whether they are?)
  • Some Heuristics of Exploratory Execution
    • You know all the techniques you need to do good exploratory testing.
    • The challenge is to develop your thinking about testing, so that you apply useful techniques at appropriate times.
      • Work with a charter.
      • Plunge in and quit.
      • Let yourself fork.
      • Use lateral thinking, but manage it.
      • Work in sessions. Use time boxes.
      • Generate ideas by asking questions.
      • Ask questions about causes.
      • Use the test design model as a structure to help you generate questions
      • If you have no questions, treat that as a thinking trigger
      • Keep these straight: Observations versus inference
      • Pay attention to coverage
  • Work with a Charter
    • Pick a task or a question or a theme that you think you can cover in a day or less. This is your charter. You might pick this from a more detailed project outline or a function list or any other list of project elements or product risks. Or you might generate the chartering idea in the moment.
    • Add some detail to the charter.
      • Don't work down to the level of individual tests (record test ideas if you have good ones in the moment that you want to remember, but generating individual ideas isn't your goal at this point)
      • A charter for a session might include what to test, what tools to use, what testing tactics to use, what risks are involved, what bugs to look for, what documents to examine, what outputs are desired, etc.
    • The charter will be the primary guide for one or a few exploratory testing sessions ( see below ).
  • Plunge In and Quit
    • This is a tool for overcoming fear of complexity.
    • Often, a testing problem isn’t as hard as it looks.
    • Sometimes it is as hard as it looks; you need to quit for a while to consider how to tackle the problem.
    • It may take several plunge & quit cycles to do the job.
    Whenever you are called upon to test something very complex or frightening, plunge in! After a little while, if you are very confused or find yourself stuck, quit!
  • Let Yourself Fork
    [Diagram: a starting idea branches into new test ideas, which branch again.]
    • New test ideas occur continually during an ET session.
  • Use Lateral Thinking, but Manage It
    • Let yourself be distracted… because you never know what you’ll find…
    • Follow up new ideas within sessions and in setting charters for new sessions...
    • ...but within sessions, step back a few times to take stock of your status within your charter, and across sessions, review your set of charters against your project mission.
  • Work in Sessions. Use Time Boxes.
    • A typical testing session is about 60 to 90 minutes. This is about as long as you can sustain full concentration.
      • At the end of the session, take a break. Make a few notes on what you've learned and what issues you think are still open. Report your bugs. Go to meetings, talk to people, etc.
    • A charter defines the goal of your work for one or a few sessions. At the end of each session, ask whether you've learned enough to have satisfied your charter. Even if you don't do all the tests that you imagined when defining your charter, you might decide to stop because you feel that:
      • you're creating/executing strong tests and they're not exposing any problems, or
      • other tasks have higher priority
    • Allocate time for any work that you're going to do, and rethink your task (make a new time commitment) rather than simply overrunning your time allocation.
  • Develop test ideas and tactics by asking questions
    • Product
      • What is this product?
      • What can I control and observe?
      • What should I test?
    • Tests
      • What would constitute a diversified and practical test strategy?
      • How can I improve my understanding of how well or poorly this product works?
      • If there were an important problem here, how would I uncover it?
      • What document to load? Which button to push? What number to enter?
      • How powerful is this test?
      • What have I learned from this test that helps me perform powerful new tests?
      • What just happened? How do I examine that more closely?
    • Problems
      • What quality criteria matter?
      • What kinds of problems might I find in this product?
      • Is what I see, here, a problem? If so, why?
      • How important is this problem? Why should it be fixed?
  • Ask Questions About Causes
    • Context : What about this situation makes this risk exist at all?
    • Probability : Given that there is a risk, what influences the probability that a problem will occur?
      • Threat : What event or entity could precipitate the problem? (e.g. bad data)
      • Vulnerability : What is it about the product that allows the threat to cause a problem? (e.g. Fault or feature in the code)
      • Safeguards : What measures can we take (have we taken) to eliminate the vulnerability or threats (e.g. input filters)
    • Problem : What form would the problem / failure take?
      • Failing components : What part(s) of the product would fail?
      • Detection : What would be required to detect the problem if it occurs?
    • Impact : So, what if the problem occurs?
      • Damage : What is the loss? What are the downstream effects?
      • Repair : Can the damage be fixed? Who would fix it?
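The threat / vulnerability / safeguard questions above map directly onto code. All three pieces below are invented examples: the threat is bad data, the vulnerability is an unchecked division, and the safeguard is an input filter.

```python
# Risk questions as code. average_unsafe() carries the vulnerability;
# input_filter() is the safeguard; bad_data is the threat. All invented.

def average_unsafe(values):
    # Vulnerability: divides by len(values), so empty or fully-filtered
    # input raises ZeroDivisionError; non-numbers raise TypeError.
    return sum(values) / len(values)

def input_filter(values):
    # Safeguard: screen out the bad-data threat before it reaches the math.
    return [v for v in values if isinstance(v, (int, float))]

def average_safe(values):
    cleaned = input_filter(values)
    if not cleaned:                    # detection: notice the empty case
        return None                    # contained failure, not a crash
    return sum(cleaned) / len(cleaned)

bad_data = ["oops", None]              # the threat made concrete
```

Walking one concrete failure through the question list this way often reveals which safeguard is missing, which is the test idea.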
  • Use the Test Design Model as a Structure to Help You Generate Questions
    [Diagram: the test strategy model -- Project Environment, Product Elements, Quality Criteria, and Test Techniques feeding Perceived Quality.]
  • Test Design Model
    • We've used this model to organize failure-modes and to analyze specifications.
    • We can also use it to guide test design overall.
      • If you're out of ideas about what to test,
        • Pick the focus of your tests by sampling your Product's Elements and considering which ones (or which interesting combinations) haven't yet been sufficiently tested, OR
        • Pick a theme for a set of tests by reviewing the Quality Attributes and considering which ones are important for the project but haven't yet been thoroughly tested. (Then pick the product elements that are interesting to test for this attribute, such as testing the product's Preferences dialog (an element) for accessibility (a quality attribute).)
      • If you're not sure what to test for
        • Review a failure mode catalog for examples of ways your product might fail.
      • When considering how to test ,
        • Look carefully at your Project Factors , for suggestions of factors that might make one approach easier or less expensive or in some other way more appropriate than another.
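The element-sampling idea above can be made mechanical. As a minimal sketch, assuming we keep a list of product elements and a log of element pairs that tests have already touched (the element names and tested-pairs log below are invented for illustration):

```python
from itertools import combinations

# Hypothetical product elements and a log of element pairs already
# touched by tests -- names are illustrative only.
elements = ["Preferences", "Import", "Export", "Printing"]
tested = {("Preferences", "Import"), ("Import", "Export")}

def untested_pairs(elements, tested):
    """Return element pairs that no test has yet covered."""
    # Normalize pair order so ("A", "B") and ("B", "A") match.
    seen = {tuple(sorted(p)) for p in tested}
    return [p for p in combinations(sorted(elements), 2)
            if p not in seen]

# Each remaining pair is a candidate focus for the next session.
for pair in untested_pairs(elements, tested):
    print(pair)
```

Even a crude inventory like this makes "which combinations haven't been sufficiently tested?" a question you can answer rather than guess at.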
  • Keep Straight: Observation vs. Inference
    • Observation and inference are easily confused.
    • We don't have direct access to our sensory data.
        • Even the simplest perceptions are selective interpretations of physiological events.
        • We miss things that occur right in front of us. See http://www.apa.org/monitor/apr01/blindness.html on inattentional blindness. It is easy to miss bugs that occur in front of your eyes.
        • Most of what we sense is lost (fades out of short term memory) within a few seconds.
        • Some things you think you see in one instance may be confused with memories of other things you saw at other times.
    • It’s easy to think you “saw” a thing when in fact you merely inferred that you must have seen it.
    What can you do about it?
  • Keep Straight: Observation vs. Inference
    A few suggestions:
    • Accept that we’re all fallible, but that we can learn to be better observers by learning from mistakes.
    • Pay special attention to incidents where someone notices something you could have noticed, but did not.
    • Don’t strongly commit to a belief about any important evidence you’ve seen only once.
    • Whenever you describe what you experienced, notice when you’re describing what you saw and heard, and when you are instead jumping to a conclusion about “what was really going on.”
    • Where feasible, look at things in more than one way, and collect more than one kind of information about what happened (such as repeated testing, paired testing, loggers and log files, or video cameras).
  • Pay Attention to Coverage
    • Testers with less expertise…
      • Think about coverage mostly in terms of what they can see.
      • Cover the product indiscriminately.
      • Avoid questions about the completeness of their testing.
      • Can’t reason about how much testing is enough.
    • Better testers are more likely to…
      • Think about coverage in many dimensions.
      • Maximize diversity of tests while focusing on areas of risk.
      • Invite questions about the completeness of their testing.
      • Lead discussions on what testing is needed.
  • Where Do These Heuristics Come From?
    • Notice problems.
    • Wonder why.
    • Notice a pattern or dynamic.
    • Put it into words.
    • Try using it; try talking about it.
    • See if it sticks.
  • Some of the Types of Heuristics
    • Trigger Heuristics: Ideas associated with an event or condition that help you recognize when it may be time to take an action or think a particular way. Like an alarm clock for your mind. Example: No Questions
    • Subtitle Heuristics: Help you reframe an idea so you can see alternatives and bring out assumptions during a conversation.
    • Guideword Heuristics: Words or labels that help you access the full spectrum of your knowledge and experience as you analyze something.
    • Heuristic Model: A representation of an idea, object or system that helps you explore, understand, or control it.
    • Heuristic Procedure or Rule: A plan of action that may help solve a class of problems.
  • A Trigger: You Have No Questions
    • If you don’t have any issues or concerns about product testability or test environment, that itself may be a critical issue.
    • When you are uncautious, uncritical, or uncurious, that’s when you are most likely to let important problems slip by. Don’t fall asleep!
    • If you run out of questions about a specific part of the product, use the test model, or a failure mode catalog, or a risk list to look for new questions. If that doesn't work, switch areas.
    • If you've run out of ideas but think there's more work to do in that part of the product, trade tasks with someone else. Have them try it.
    • If you find yourself without any questions, ask yourself “Why don’t I have any questions?”
  • Subtitle Heuristic Example: No One Would Do That
    • Subtitle Heuristics: Help you reframe an idea so you can see alternatives and bring out assumptions during a conversation.
      • When you hear: “No user would do that.”
      • Instead of accepting this wishful skepticism about risk, think: they mean, “No user that I can think of, who I like, would do that on purpose.”
      • And then think: if no one thinks there are risks here, there are probably a lot of bugs.
  • Heuristic Model Example:
    • The test strategy model is a collection of guideword heuristics, integrated into a model of risk.
  • Heuristic Procedure or Rule Example:
    • Test techniques, especially the testing attacks, are heuristic procedures.
  • Guideword Heuristic Example: "Buggy"
    • Guideword Heuristics: Words or labels that help you access the full spectrum of your knowledge and experience as you analyze something.
    • "Buggy" is a guideword heuristic intended to remind us of Myers' famous assertion,
      • "The number of bugs remaining in a product is proportional to the number already found."
    • It's a useful idea, but . . .
      • What does it mean?
      • Why does it work?
      • How or when might it be wrong?
    Using this heuristic requires insight.
  • Guideword Heuristics for Analyzing a Diagram
    • [Slide diagram: a system sketch (Browser, Web server, App server, Database layer) with guidewords annotating its boxes (components), lines (interfaces), and paths through the system.]
    • Guidewords for boxes and lines include: Error handling, Limitations, Data structures, Conditional behavior, Status communication, Contents / Algorithms, Timing / Sequencing, Incorrect, Extra / Forking, Extra / Interfering, Missing / Drop-out.
    • Guidewords for paths include: Simplest, Popular, Critical, Complex, Pathological, Challenging.
  • Key challenges of exploratory testing
    • Software testing poses several core challenges to the skilled practitioner / manager. The same challenges apply to exploratory testers:
      • Learning (How do we get to know the program?)
      • Visibility (How to see below the surface?)
      • Control (How to set internal data values?)
      • Risk / selection (Which are the best tests to run?)
      • Execution (What’s the most efficient way to run the tests?)
      • Logistics (What environment is needed to support test execution?)
        • We don't cover logistics in this course
      • The oracle problem (How do we tell if a test result is correct?)
        • See the course sections on oracles, high volume automation and architectures of test automation
      • Reporting (How can we replicate a failure and report it effectively?)
        • See the sections on bug reporting
      • Documentation (What test documentation do we need?)
      • Measurement (What metrics are appropriate?)
      • Stopping (How to decide when to stop testing?)
      • Training and Supervision (How to help testers become effective, and how to tell whether they are?)
  • Test Documentation & Exploratory Testing
    • Exploratory testing is a "lightweight process" -- in general, less documentation is better.
    • "Less" doesn't necessarily mean "none," but it does mean "no more than you're sure you need." Think of the documentation that you create during exploratory testing as tester's notes.
    • Taking extensive notes about what you're doing, while you're testing:
      • takes a lot of time
      • focuses you on procedural-level thinking (the details of what you've done) rather than tactical or strategic thinking.
      • focuses you on what you did rather than on what you should do next
      • doesn't necessarily provide much benefit. After all, exploratory testers aren't trying to set up regression tests--they create new tests as they learn more, rather than repeating old tests.
    • But you might intentionally explore the product for the purpose of creating notes, such as function lists, lists of variables, and so on.
    • Use your judgment. Create what will be useful to guide your testing, or to help you supervise someone else or report your status to someone else, but look for ways to minimize interference with your workflow. Don't create documents for archival purposes unless there are clear, affirmative requirements for them.
  • Test Documentation & Exploratory Testing
    • But don't we need to take extensive notes while testing in order to know what we've been doing, so we can reproduce bugs when we find them?
    • Sometimes, when you find a bug, you won't be able to reproduce it.
      • In my (Kaner's) experience, most of these times, it is because the tester was not paying attention to what turns out to be the critical variable. (See our discussion of irreproducible bugs in the discussion of bug reporting.)
      • More rarely, the tester forgets what she was doing and can't recreate the steps. This is more of a problem with inexperienced testers, who punch keys without awareness of the reasoning behind the test they're running at the moment.
        • Is the better solution to spend hours writing procedural details so that, on rare occasions when they are needed, they are available?
        • Or is the better solution to spend more time thinking about why you're doing the test that you're doing, before and while you do it? (This makes it more likely that you'll run much the same test if you try to repeat it.)
      • Automatic logging tools can help you solve part of this problem.
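As a sketch of what such a logging aid might look like: a session log that timestamps each tester action so the trail can be replayed when a bug needs reproducing. The class and the action names below are invented for illustration, not a real tool.

```python
from datetime import datetime, timezone

# Minimal sketch of an exploratory-session logger: every tester action
# is appended to a timestamped trail that can be replayed later.
class SessionLog:
    def __init__(self):
        self.entries = []

    def record(self, action, detail=""):
        """Store one action with a UTC timestamp."""
        stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
        self.entries.append((stamp, action, detail))

    def replay(self):
        """Return the trail as readable lines, oldest first."""
        return [f"{t}  {a}  {d}" for t, a, d in self.entries]

session = SessionLog()
session.record("load", "sample-report.doc")
session.record("click", "Print Preview")
session.record("observe", "blank preview pane -- possible bug")
print("\n".join(session.replay()))
```

In practice a keystroke or screen recorder captures this automatically, but even a lightweight trail like this spares the tester from procedural note-taking during the session.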
  • What if You Need Extensive Documentation?
    • Concerns
      • We have a long-life product and many versions, and we want a good corporate memory of key tests and techniques. Corporate memory is at risk because of the lack of documentation.
      • The regulators would excommunicate us. The lawyers would massacre us. The auditors would reject us.
      • We have specific tests that should be rerun regularly.
    • Suggestions
      • In the face of concerns like these, use a balanced approach, not purely exploratory.
      • You don't have to remember every test. You only have to remember the tests that your company will want to repeat, or produce for later inspection.
  • Key challenges of exploratory testing
    • Software testing poses several core challenges to the skilled practitioner / manager. The same challenges apply to exploratory testers:
      • Learning (How do we get to know the program?)
      • Visibility (How to see below the surface?)
      • Control (How to set internal data values?)
      • Risk / selection (Which are the best tests to run?)
      • Execution (What’s the most efficient way to run the tests?)
      • Logistics (What environment is needed to support test execution?)
      • The oracle problem (How do we tell if a test result is correct?)
      • Reporting (How can we replicate a failure and report it effectively?)
      • Documentation (What test documentation do we need?)
      • Measurement (What metrics are appropriate?)
        • See the sections on test-related measurement
      • Stopping (How to decide when to stop testing?)
      • Training and Supervision (How to help testers become effective, and how to tell whether they are?)
  • Training and Supervision
    • Concerns
      • ET works well for expert testers, but we don’t have any.
    • Replies
      • Detailed test procedures do not solve that problem, they merely obscure it, like perfume on a rotten egg.
      • Our goal as test managers should be to develop skilled testers so that this problem disappears, over time.
      • Since ET requires test design skill in some measure, ET management must constrain the testing problem to fit the level and type of test design skill possessed by the tester.
      • We constrain the testing problem by personally supervising the testers, and making use of concise documentation, NOT by using detailed test scripts. Humans make poor robots.
  • Training and Supervision
    • In exploratory testing, every tester is a test planner and test designer.
    • To train the explorer:
      • Start with a small task, testing a small area of the product or testing in a narrow way or for a particular kind of bug. Let him do the work (for an hour or a few hours). Talk with him about what he did, have him show you his tests, comment on them, suggest alternatives, and then try again.
      • As the tester's skill improves, you can assign longer and more complicated tasks. It's like building up a credit rating. The tester earns more over time. The tester who doesn't earn more should be reassigned to some other type of work.
      • As you pick small tasks, pick some that give good practice in specific techniques. The more techniques the tester understands and can do, the more effective he'll be as an explorer.
  • Training and Supervision
    • Concerns
      • How do I tell the difference between bluffing and exploratory testing?
      • If I send a scout and he comes back without finding anything, how do I know he didn’t just go to sleep behind some tree?
    • Replies
      • You never know for sure – just as you don’t know if a tester truly followed a test procedure.
      • It’s about reputation and relationships. Managing testers is like managing executives, not factory workers.
      • Give novice testers short leashes; better testers long leashes. An expert tester may not need a leash at all.
      • Work closely with your testers, and these problems go away.
  • Better Thinking
    • Conjecture and Refutation : reasoning without certainty.
    • Abductive Inference : finding the best explanation among alternatives.
    • Lateral Thinking: the art of being distractible.
    • Forward-backward thinking : connecting your observations to your imagination.
    • Heuristics: applying helpful problem-solving short cuts.
    • De-biasing: managing unhelpful short cuts.
    • Pairing: two testers, one computer.
    • Study other fields. Example: Information Theory.