ET Workshop v. 1.20 - Skills and Techniques
  • From Black Box Software Testing, copyright © 1996 – 2002 Cem Kaner. Characterizing the Styles: What skills and knowledge does the style require or assume? Programming / debugging. Knowledge of applications of this type and how they fail. Knowledge of the use of applications of this type. Deep knowledge of the software under test. Knowledge of the system components (h/w, s/w, or network) that are the context of the application. Long experience with software development projects and their typical problems. Requirements analysis or problem decomposition techniques. Mathematical, probability, and formal modelling techniques. Query: are any of these techniques appropriate for novices? Can we train novices in exploration?
  • Hunch – gut feeling (intuition). hunch (IDEA), noun [C]: an idea which is based on feeling and for which there is no proof. I had a hunch that you'd be here. [+ that clause] Sometimes you have to be prepared to act on/follow/play a hunch. Few people are willing to stake their reputation on a hunch.
  • Hunch – gut feeling (intuition). Random: People who don't understand exploratory testing describe it as "random testing". They use phrases like "random tests", "monkey tests", "dumb user tests". This is probably the most common characterization of exploratory testing, but it describes very little of the type of testing actually done by skilled exploratory testers. Questioning: Questioning is the essence of exploration. The tester who constantly asks good questions can avoid blind spots, quickly think of new test cases, constantly vary approaches and targets, and discover holes in specifications and product descriptions. These notes collect my material on questioning into one section, presented later as part of test planning. Similar to Previous Errors: James Bach once described exploratory testers as mental pack rats who hoard memories of every bug they've ever seen. The way they come up with cool new tests is by analogy: "Gee, I saw a program kind of like this before, and it had a bug like this. How could I test this program to see if it has the same old bug?" A more formal variation: create a potential-bug list, like Appendix A of "Testing Computer Software". Another related type of analogy: sample from another product's test docs. Follow up Gossip and Predictions. Sources of gossip: directly from programmers, about their own progress or about the progress / pain of their colleagues; from attending code reviews (for example, at some reviews the question is specifically asked in each review meeting, "What do you think is the biggest risk in this code?"); from other testers, writers, marketers, etc. Follow up Recent Changes. Given a current change: tests of the feature / change itself; tests of features that interact with this one; tests of data that are related to this feature or data set; tests of scenarios that use this feature in complex ways.
  • We usually think of modeling in terms of preparation for formal testing, but there is no conflict between modeling and exploration. Both types of tests start from models. The difference is that in exploratory testing, our emphasis is on execution (try it now) and learning from the results of execution rather than on documentation and preparation for later execution. Formal testing / exploratory testing: both start from MODELS!
  • Interfaces Architectural walkthrough
  • Example: How could we produce a paper jam as a result of defective firmware, rather than as a result of jamming the paper? The laser printer feeds a page of paper at a steady pace. Suppose that after feeding, the system reads a sensor to see if there is anything left in the paper path. A failure would result if something was wrong with the hardware or software controlling or interpreting the paper feeding (roller, choice of paper origin, paper tray), paper size, clock, or sensor. BBS POS problems (Finansavisen 15.4.02): Shop enters NOK 700 on the cash register → transfers to POS. Shop presses "payment". User presses PIN + OK. User confirms the amount (which has probably already changed?). Posted on account: NOK 1700. How can that happen? BBS does not know. I did this a lot at Statoil – e.g. multi-user problems: What if two users update the same record? How can I make the "optimistic locking" fail? (Optimistic locking: locking implemented by each programmer, mostly in stored procedures – did NOT use Oracle locking.)
  • Java Beans (J2EE) – states: Created (instantiated), Modified, Stored, Deleted (dies). The CRBA project: while modifying one bean, another one was created. Press "Back" → lost track of which one of the beans you were working on.
  • Improvisational Testing. The originating model here is of the test effort, not (explicitly) of the software. Another approach to ad hoc testing is to treat it as improvisation on a theme, not unlike jazz improvisation in the musical world. For example, testers often start with a Test Design that systematically walks through all the cases to be covered. Similarly, jazz musicians often start with a musical score or "lead sheet" for the tunes on which they intend to improvise. In this version of the ad hoc approach, the tester is encouraged to take off on tangents from the original Test Design whenever it seems worthwhile. In other words, the tester uses the test design but invents variations. This approach combines the strengths of both structured and unstructured testing: the feature is tested as specified in the test design, but several variations and tangents are also tested. On this basis, we expect that the improvisational approach will yield improved coverage. Improvisational techniques are also useful when verifying that defects have been fixed. Rather than simply verifying that the steps to reproduce the defect no longer result in the error, the improvisational tester can test more deeply "around" the fix, ensuring that the fix is robust in a more general sense. Johnson & Agruss, "Ad Hoc Software Testing: Exploring the Controversy of Unstructured Testing", STARWEST '98.
  • c.Rel: Product development – change requests; how to test change requests? How much needs to be retested? Did not have a function model or a data model with relations to the physical model.
  • By modelling specifications, drawing finite state diagrams of what we thought was important about the specs, or just looking at the application or the API, we can find orders of magnitude more bugs than with traditional tests. Example of file transfer in PC-Bank: The user uses a PC application to select data to be retrieved from the host in a file transfer. The PC would dial in to the host, submit the request, wait for the file to be produced and then download the file. The number of error situations is horrendous: the request could be incomplete or contain errors, the host could be busy so the file would take "forever" to produce, the produced file could be empty or enormously big, etc. Without a state transition diagram, communication with the programmer became impossible; he did not understand what to implement or how to do it, and he never got it right until they defined the diagram. The NORA project at Norges Bank: used state diagrams to document the dialogue between the (new) front-end solution and the core systems.
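To make the state-model idea concrete: below is a minimal Python sketch (all state and event names are my own invention, loosely echoing the PC-Bank dialogue above, not taken from the actual system) of a transition table plus a random walk that generates event sequences to try against the implementation.

import random

# Transition table: (current state, event) -> next state.
TRANSITIONS = {
    ("idle", "dial"): "modems_negotiating",
    ("modems_negotiating", "carrier_ok"): "connected",
    ("modems_negotiating", "no_answer"): "error_no_modem",
    ("connected", "request_file"): "awaiting_file",
    ("awaiting_file", "file_ready"): "transferring",
    ("awaiting_file", "host_busy"): "error_timeout",
    ("awaiting_file", "invalid_request"): "error_bad_request",
    ("transferring", "done"): "complete",
    ("transferring", "line_drop"): "error_aborted",
}

def random_walk(start="idle", max_steps=10):
    """Generate one event sequence by walking the model from the start state."""
    state, path = start, []
    for _ in range(max_steps):
        events = [e for (s, e) in TRANSITIONS if s == state]
        if not events:
            break  # terminal state (success or error) reached
        event = random.choice(events)
        path.append(event)
        state = TRANSITIONS[(state, event)]
    return path, state

for _ in range(5):
    events, final = random_walk()
    print(" -> ".join(events), "| ends in:", final)

Each generated sequence is a candidate exploratory session, and the same table makes it obvious which error states no test has reached yet.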
  • Output constraints: large files, disk full, no memory. Statoil survey application: multilingual. The stored data had one record per language, so the whole data structure was fairly large. If you filled out part of the survey and saved, then changed language and continued, the data structure would contain ALL the answers in all languages (and these could of course diverge from each other). Whittaker: That is what software does: input, store, compute and output! Software fails because the programmer has NOT set up good enough constraints on input, storage, computation and output – invalid values are not checked well enough! Testing strategy: Study the inputs – run attacks to break input constraints. Study the outputs – run attacks on output constraints. Study the way software stores data – run attacks designed to corrupt internal data. Study how the software does computation – run attacks to force errant computation. Whittaker: Input, Store, Compute, Output.
  • The most unfortunate result of a lack of documentation is to leave testers in the dark. Instead of being able to read about a program's behavior, they must spend time learning the application by using it. Too often, this leaves inexperienced testers in the "bang on the keyboard" mode, hoping to break the system. What is needed is an approach that includes this exploration process, but also leads to early and useful bug detection. We base our approach on what we call a "failure model". A failure model is a generalized description of the kinds of failures that we expect to find in the software. How do we determine the types of failures that we expect? We study the kinds of mistakes that programmers typically make. One such study resulted in our paper, "Why Software Fails", presented at STAR EAST 1999 and published in ACM SE Notes [1]. In "Why Software Fails" we determined that after software had been tested and released there still existed a major class of defects summarized by failure to properly constrain the software. In fact, there were four classes of constraint failures: 1) Input, 2) Output, 3) Storage, and 4) Computation. This should not be surprising, since these are the four things that programs generally do. For details, please refer to our paper on the subject. For our purposes here, let us assume that all software may suffer from these four classes of failures and use this as a model to design attacks against the software. What this means is that we will assume that failures of these types exist and, while we are exploring the functionality of the software we are testing, we will also check for these types of failures. For each category of failure there are several attacks, and, to date, we have found eighteen different attacks. We are very interested in hearing from you should you find yet another category of attack. Inputs need not be applied in an ad hoc manner. Instead, every input applied should either be part of an effective attack or it should be part of exploring the functionality in order to plan an attack. In the latter case, endeavor to act like a user to the best of your knowledge of how users will use the application. In other words, try to apply inputs that force the application to get real work done. When testing a word processor, create, format, and edit a document. Put yourself in the user's shoes and try to get done what the user will be doing with the application. Once the functionality is understood sufficiently, it's time to get nasty. Here's our advice on how to accomplish this. Study the kinds of mistakes programmers make → design attacks against the software. Explore the functionality to plan an attack → act like a user. It's time to get nasty.
  • First attack: Apply inputs that force all the error messages to occur. I like to start with erroneous conditions to get them out of the way. This is the low-hanging fruit of bugs: pick it first and then move on to more sophisticated attacks. The idea is to make sure that the software doesn't try to process bad data. Enter values that are too big, too small, too long, too short, out of the acceptable range, or of the wrong data type. When an error message occurs, determine the boundary conditions of the error. We have found an error in some applications that flag a file name as too lengthy; however, at just the right length, the file name is accepted but the appended file name extension is truncated inappropriately. Second attack: Apply inputs that force the software to establish default values. This is a wonderful attack because often it means doing nothing except clicking "OK" and watching the application die. Why would something so simple constitute an effective attack? Actually, just because the tester does nothing does not mean the software doesn't have to do work. In fact, establishing defaults is a fairly intricate programming task. Developers have to make sure that variables are initialized before a loop is entered or before a function call is made. If this doesn't happen, then an internal variable might be used without being initialized. The result is often catastrophic. Third attack: Explore allowable character sets and data types. Some input values are simply problematic, particularly when you consider that special characters like $, %, #, quotation marks and so forth have special meaning in many programming languages and often require special handling when they are read as input. If the developer failed to consider this situation, then these inputs may cause the program to fail when they are encountered. Sometimes historic reserved words may cause a problem. Fourth attack: Overflow input buffers. The idea here is to enter long strings in order to overflow input buffers. This is a favorite attack of hackers because, sometimes, the application is still executing a process after it crashes. If a hacker attaches an executable string to the end of the long input string, the process may execute it. A buffer overflow in Word 2000 is one such exploitable bug. The bug is in the Find/Replace feature and is shown below. It is interesting to note that the "Find" field is properly constrained but the "Replace" field is not. Fifth attack: Find inputs that may interact and test various combinations of their values. Up to now, we have only dealt with attacks that exploit a single input entry point into the software. In other words, we have been picking an input location and poking it until the software breaks. This next attack deals with multiple inputs that are processed together or that influence one another. For example, an API that can be called with two parameters requires selection of values for one parameter based on the value chosen for the other parameter. It is often the combination of values that was programmed incorrectly because of the complexity of the logic involved in coding the solution. As an example, try the following with Word 2000. Choose the Insert option from the Table menu and experiment with the allowable values for the number of columns and the number of rows. You will soon realize that these input fields cannot be overflowed and that the maximum number of columns is 63 and the maximum number of rows is 32767. This is a good example of input value dependence.
If you enter small numbers for both, Word handles this just fine. A large number for one and a small number for the other is also fine. But if you enter the maximum of 63 columns and anything above 32000 for the number of rows, the application hangs because it overwhelms the machine's CPU cycles. Sixth attack: Repeat the same input or series of inputs numerous times. Repetition often has the effect of gobbling resources and stressing an application's stored data space, not to mention uncovering undesirable side effects. Unfortunately, most applications are unaware of their own space and time limitations, and many developers like to assume that plenty of resources are always available. An example of this can be found in Word's equation editor, which seems to be unaware that it can only handle 10 levels of nested brackets. Indeed, after the 10th pair of brackets, the equation disappears. CRBA: Input of different ASCII values → incorrect calculation of field length. Input of ´ made the Oracle SQL fail. With invalid values: pressed repeatedly. After the third time → new error message from Oracle! (not from the application). Try! Try! Try!
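Several of these input attacks can be packaged as a reusable value generator. A hedged sketch (the 255-character limit is an invented example parameter; each yielded value would be fed into whatever field or API is under test):

MAX_LEN = 255  # assumed documented limit for the field under test

def input_attack_values(max_len=MAX_LEN):
    yield ""                      # attack 1: force the "required field" error
    yield "x" * (max_len - 1)     # just inside the boundary
    yield "x" * max_len           # on the boundary
    yield "x" * (max_len + 1)     # just outside: should give a clean error
    yield "x" * 100_000           # attack 4: try to overflow the input buffer
    yield "$%#\"'`;--"            # attack 3: shell/SQL metacharacters
    yield "´"                     # the acute accent that broke Oracle SQL above
    yield "3.14"                  # wrong data type for an integer field
    yield None                    # attack 2: missing value, force the default

for value in input_attack_values():
    print(repr(value)[:60])       # in practice: submit each to the field under test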
  • Output Constraint Attacks. Applying inputs is fairly straightforward and, unfortunately, many testers equate testing with applying a large number of different inputs and input combinations. Our research indicates, however, that many bugs are simply too difficult to find by concentrating on inputs alone. Instead, we take the more difficult approach of beginning with software outputs and working our way back to causal inputs. The next series of attacks will require us to identify interesting outputs and figure out which inputs are capable of driving the application to generate those outputs. Attacks 7-11 give us insight into how we go about selecting which outputs to concentrate on. Seventh attack: Force different outputs to be generated for each input. It is often the case that a single input can generate any number of outputs, depending on the context under which it is applied. For example, if we must test a telephone switch, one input that must be tested is the ability of the switch to correctly process the input "the user picks up the phone". Since there are two major outputs that the switch will generate when this input is applied, we must test them both. Consider first the case that the phone is idle and the user picks up the receiver: the switch will generate a dial-tone output and send it to the user's phone. Now consider the case in which the phone is ringing: the switch will connect our user with the other subscriber who placed the call. Thus, we have tested the two major outputs (or behaviors) associated with the user picking up the telephone receiver. Identifying all possible outputs for the most important or frequently used inputs is an important exercise. Ensuring that testing covers each of these outputs can be hard work, but will pay off by helping us find important bugs that will irritate our users. Eighth attack: Force invalid outputs to be generated. This is a very effective attack for testers who really understand their problem domain. For example, if you are testing a calculator and understand that some functions have a restricted range for their result, trying to find input value combinations that force that result is a worthwhile effort. However, if you do not understand mathematics, it is likely that such an endeavor will be a waste of time – you might even interpret an incorrect result as correct. One of our favorite bugs falls into this category. A Y2K-related bug in Windows NT (which was fixed in service pack 5) actually allowed the system to display the date February 29, 2001 – an invalid output because the year 2001 is not a leap year. In this case, a tester unfamiliar with the leap year rule would undoubtedly have missed this bug. Ninth attack: Force output size or dimension to change. Another useful attack along these same lines is forcing a complex output to be generated and then changing some property of the output. The property that is often the most convenient for user interface testing is output size, i.e., force display areas to be recomputed by changing the length of inputs and input strings. A good conceptual example is setting a clock to 9:59 and watching it roll over to 10:00. In the first case the display area is 4 characters long and in the second it is 5. Going the other way, we establish 12:59 (5 characters) and then watch the text shrink to 1:00 (4 characters).
Too often developers write code to work with the initial case of a blank display area and are disappointed when the display area already has data in it and new data of a different size is used to replace it. Tenth attack: Force output to exceed the size of its destination. This is another attack based on outputs that is very similar to the previous attack. However, instead of looking for ways to cause the area inside the display to get corrupted, we are going to concentrate on the area outside the display. This time we are going to do things we hope don't require recalculation of the display boundaries but simply overflow them. Considering PowerPoint again, we can draw a textbox and fill it with a superscripted string. Then, changing the size of the superscript to a large font causes the top of the exponent to be truncated. Eleventh attack: Force the screen to refresh. Refreshing the screen or window as a result of a user applying some input is a major problem for users of modern windows-based GUIs. It is an even bigger problem for developers: refresh too often and you slow down your application, while failing to refresh causes anything from minor annoyances (i.e., requiring the user to force a refresh) to major bugs (preventing the user from getting work done). The general idea in searching for refresh problems is to add, delete and move objects around on the screen. This causes the background object to redisplay, and if it doesn't do so properly and in a timely fashion, you have just found the classic refresh bug. It is a good idea to try varying the distance you move an object from its original location. Move it a little, then move it a lot; move it once or twice, then move it a dozen times. Continuing with the large superscript example from above, try moving it around on the screen a little at a time. Note the nasty refresh problem shown below. CRBA: Too-long output fields changed the size of the table shown in the window (attack 10). Rollback: some functions contained an Auto Commit that committed EVERYTHING (so user-controlled commits were overridden). Try! Try!
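The clock example from attack 9 can even be used to compute the interesting test points automatically. A toy sketch, assuming a 12-hour display format:

def fmt(minutes_since_midnight):
    """Render minutes-since-midnight as a 12-hour clock string."""
    h, m = divmod(minutes_since_midnight % (12 * 60), 60)
    return f"{h if h else 12}:{m:02d}"

# Find every minute where the display width changes; these transitions are
# exactly the inputs worth trying against a fixed-width display area.
boundaries = [
    (fmt(t), fmt(t + 1))
    for t in range(12 * 60)
    if len(fmt(t)) != len(fmt(t + 1))
]
print(boundaries)  # [('12:59', '1:00'), ('9:59', '10:00')]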
  • Storage Constraint Attacks. Data is the lifeblood of software; if you manage to corrupt it, the software will eventually have to use the bad data, and what happens then may not be pretty. It is worthwhile to understand how and where data values are established. Essentially, data is stored either by reading input and then storing it internally or by storing the result of some internal computation. By supplying input and forcing computation, we enable data flow through the application under test. The attacks on data follow this simple fact, as outlined in attacks 12-14. However, without access to the source code, many testers do not bother to consider these attacks. We believe, though, that useful testing can be done even though specifics of the data implementation are hidden. We like to tell our students to practice "looking through the interface". In other words, take note of what data is being stored while the software system is in use. If data is entered on one screen and visible on another, then it is being stored. Information that is available at any time is being stored. Some data is easy to see. A table structure in a word processor is one such example, in which not only the data but the general storage mechanism is displayed on the screen. Some data is hidden behind the interface and requires analysis to discover its properties. Once the nature of the data being stored is understood, try to put yourself in the position of the programmer and think of the possible data structures that might be used to store such data. The more that programming and data structures are understood, the easier it will be to execute the following attacks. The more completely you understand the data you are testing, the more successful the attacks will be at finding bugs. Twelfth attack: Apply inputs using a variety of initial conditions. Inputs are often applicable in a variety of circumstances. Saving a file, for example, can be performed when changes have been made, and it can also be performed when no changes have been made. Testers are wise to apply each input in a number of different circumstances to account for the many such interactions that users will encounter when using the application. Thirteenth attack: Force a data structure to store too many/too few values. There is an upper limit on the size of all data structures. Some data structures can grow to fill the capacity of machine memory or hard disk space and others have a fixed upper limit. For example, a running monthly sales average might be stored in an array bounded at 12 or fewer entries, one for each month of the year. If you can detect the limits on a data structure, try to force too many values into the structure. If the number is particularly large, the developer may have been sloppy and not programmed an error case for overflow. Special attention should be paid to structures whose limits fall on the boundary of data types: 255, 1023, 32767 and so on. Such limits are often imposed simply by declaration of the structure's size and very often lack an overflow error case. Underflow is also a possibility and should be tested as well. This is an easy case, requiring only that we delete one more element than we add. Try deleting when the structure is empty, then try adding an element and deleting two elements, and so on. Give up if the application handles 3 or 4 such attempts.
Fourteenth attack: Investigate alternate ways to modify internal data constraints. The phrase "the right hand knoweth not what the left hand doeth" describes this class of bugs. The idea is simple, and developers leave themselves wide open to this attack; in most programs there are lots of ways to do almost anything. What this means to testers is that the same function can be invoked from numerous entry points, each of which must ensure that the initial conditions of the function are met. An excellent example of this is the crashing bug one of our students found in PowerPoint, regarding the size of a tabular data structure. The act of creating the table is constrained to 25 × 25 as the maximum size. However, one can create such a table, then add rows and columns to it from another location in the program – crashing the application. The right hand knew better than to allow a 26 × 26 table, but the left hand wasn't aware of the rule. CRBA: No validation of max input length in key fields → save would fail; the error was not detected because of faulty exception handling. Statoil survey application: multilingual. The stored data had one record per language, so the whole data structure was fairly large. If you filled out part of the survey and saved, then changed language and continued, the data structure would contain ALL the answers in all languages (and these could of course diverge from each other). Try!
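A sketch of attack 13 against an imagined bounded structure. The 12-entry monthly-sales array comes from the text above, but the MonthlySales class and its API are stand-ins I invented, given a deliberate underflow bug to show what the attack catches:

class MonthlySales:
    """Stand-in for the structure under test: fixed capacity of 12 entries."""
    def __init__(self):
        self.values = []
    def add(self, v):
        if len(self.values) >= 12:
            raise OverflowError("structure full")  # the error case we hope exists
        self.values.append(v)
    def remove(self):
        self.values.pop()  # bug: no empty check, so underflow raises IndexError

sales = MonthlySales()
for month in range(13):            # overflow: add one more than the limit
    try:
        sales.add(100)
    except OverflowError as e:
        print(f"add #{month + 1} rejected cleanly: {e}")

try:
    MonthlySales().remove()        # underflow: delete from an empty structure
except IndexError:
    print("underflow not handled cleanly -- report a bug")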
  • Computation Constraint Attacks. Computation, using both data that is stored internally and data that is received as input from users, is one of the most fundamental tasks that software performs, and it presents challenging testing problems. Like data, computation cannot be directly seen; it is hidden behind the user interface, and many of the details associated with a particular computation must be surmised without benefit of the source. Computation is also everywhere in a software application. Computation is performed in assignment statements that pervade all code, no matter its functionality. Software computes when it loops, computes when it branches and computes when its features interact with its stored data. The next three attacks will, hopefully, put some perspective on this difficult testing endeavor. Fifteenth attack: Experiment with invalid operand and operator combinations. This class of attacks requires investigation of the data type and allowable values associated with operands in one or more internal computations. If one has access to the source, this information is obtainable. Otherwise, testers must do their best at determining what computation is taking place and what type of data is being used. Sometimes inputs or stored data are well within the legal boundaries but are illegal for some types of computation. Division by zero is a good example: zero is a valid integer, but it is invalid as the denominator of a division computation. Computations that have more than one operand are subject not only to the above attack but also to potential operand conflict. For example, both character and number types can be combined with the '+' operator in many programming languages. In the former case, adding characters causes them to be concatenated, and in the latter case, integer arithmetic is performed. However, forcing a software system to add a character to a number (conflicting operands) might cause a failure. Sixteenth attack: Force a function to call itself recursively. Functions often call other functions to get work done. Sometimes, functions call themselves. This is called recursion, and it is a powerful alternative to the iterative loops that developers often employ. Both loops and recursion can be problematic if the number of times they execute is not limited to a finite number. The "infinite loop" is a common programming error; however, such things are generally well covered in unit testing. As a system tester, it has been many years since I saw such a problem remain unfixed long enough for me to see it. Recursion, however, is another story altogether. Modern software applications offer ways for objects to reference themselves, which, in turn, offers testers new ways to break them. The hyperlink is the most common analogy. Imagine a web page that has a link to itself. This is the general idea of recursion. Now imagine a web page with a script that automatically executes when the page is displayed. Suppose that script reloads its host page, which will execute the script, which reloads the host page, which executes the script… You get the idea. This shows the danger of recursion: if it is implemented improperly, it will quickly overwhelm the resources of the machine and eventually generate a heap overflow. Seventeenth attack: Force computation results to be too large or too small. The next class of computation attacks is aimed at overflowing and underflowing data objects that store computation results. Even simple computations like y = x + 1 are problematic around boundary values.
If both x and y are signed 2-byte integers and x has the value 32767, this computation will fail because the result will overflow its storage; the result exceeds the range of acceptable signed 2-byte integers. The same thing goes at the negative end of a data type: y = x - 1 will fail if we can assign x the value -32768. Eighteenth attack: Find features that share data or interact poorly. The last attack category discussed in this paper is perhaps the granddaddy of them all and the one that separates testing novices from the pros: feature interaction. The problem here is nothing new: different application features share the same data space and the interaction of the two features causes the application to fail. Features that share data and could interpret that data in conflicting ways provoke an open question: How do you test feature interaction? Right now we are stuck with trial and error, so this example must suffice for now. This example shows an unexpected result when combining footnotes and dual columns on a single page in Word 2000. The problem is that Word computes the page width of a footnote from the reference point of the note. Thus, if one has two footnotes on the same page, one referenced from a dual-column location and one from a single-column location, the single-column footnote pushes the dual-column footnote to the next page. In addition, any text between the note's reference point and the bottom of the page is pushed to the next page. The following screen shots illustrate the problem vividly. Where is the second column of text? (It is on the next page along with the footnote.) Can you live with the document looking like this? You must, unless you find a workaround (which means time spent away from preparing your document). Other examples in Word 2000 include problems with widow/orphan control on paragraphs with embedded pictures and resizing text boxes that have been grouped with other types of objects. RATS: Oracle output displayed ~. Cause uncertain.
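The y = x + 1 overflow is easy to demonstrate. Python integers never overflow, so the sketch below uses ctypes.c_int16 to simulate the signed 2-byte storage the text describes:

from ctypes import c_int16

x = c_int16(32767)           # largest signed 16-bit value
y = c_int16(x.value + 1)     # result no longer fits its storage
print(y.value)               # -32768: wrapped around at the positive end

x = c_int16(-32768)          # smallest signed 16-bit value
y = c_int16(x.value - 1)     # underflow at the negative end
print(y.value)               # 32767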
  • Remember Bug Advocacy
  • Use Case Design: main / normal flow and ALTERNATE flows. Normal testing techniques: one or more test cases for each flow (control coverage). Flows can be "sorted" based on risk, which in turn influences the number of test cases (see the sketch after the next note).
  • Positive / negative tests for each FLOW. Boundary value / equivalence class tests of business rules. Sequencing of events or actions. Performance.
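A trivial sketch of the risk-based sorting mentioned above; flow names, risk scores and the allocation rule are all invented for illustration:

FLOWS = [                          # (flow, risk score 1-5)
    ("main: successful withdrawal", 5),
    ("alt: insufficient funds", 4),
    ("alt: network timeout", 3),
    ("alt: card retained", 2),
]

for name, risk in sorted(FLOWS, key=lambda f: f[1], reverse=True):
    n_cases = 1 + risk // 2        # crude allocation rule, purely illustrative
    print(f"{name}: plan {n_cases} test case(s)")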
  • Realistic, clear, complete test.
  • NORMAL and KILLER soap operas. As soap operas have evolved, Hans Buwalda distinguishes between normal soap operas, which combine many issues based on user requirements – typically derived from meetings with the user community – and probably don't exaggerate beyond normal use, and killer soap operas, which combine and exaggerate to produce extreme cases.
  • Flight reservation system. John Musa, in the introduction to his book Software Reliability Engineering, says that you should use different values within an equivalence class. For example, if you are testing a flight reservation system for two US cities, vary the cities. They shouldn't matter, but sometimes they do.
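Musa's advice, sketched: sample different members of the equivalence class on each run instead of hard-coding one representative. The city list and the commented-out book_flight call are hypothetical:

import random

US_CITIES = ["Boston", "Denver", "Miami", "Seattle", "Austin"]  # one class

def pick_pair():
    """Two distinct cities; any pair 'shouldn't matter', so vary them."""
    return random.sample(US_CITIES, 2)

for _ in range(3):
    origin, destination = pick_pair()
    print(f"test: book {origin} -> {destination}")
    # book_flight(origin, destination)  # the system under test (hypothetical)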
  • Brother HL II laser printer (personal printer): if I started the print job BEFORE turning the printer on → paper jam on the first sheet!
  • My home office PC runs Windows 98. At start-up: leave it at "logon" for a "long time" → some services will not be started → MS Outlook will crash (and will never come up after start-up – but if I try to start a second copy, the same thing happens again). If I start IE6, it stops with a blank screen, but if I press "Refresh" it restarts correctly (and I can now start MS Outlook without problems – sometimes!)
  • Do online insert during a heavy batch insert.
  • Danny Faught: wrote a program to change one bit in a file of the operating system → the operating system crashed.
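The one-bit corruption idea fits in a few lines. A minimal sketch (the file path is arbitrary and the target should only ever be a scratch copy, never a file you care about):

import random

def flip_one_bit(path):
    """Flip one randomly chosen bit in the file at `path` (assumed non-empty)."""
    data = bytearray(open(path, "rb").read())
    i = random.randrange(len(data))      # pick a random byte...
    bit = random.randrange(8)            # ...and a random bit within it
    data[i] ^= 1 << bit
    open(path, "wb").write(bytes(data))
    return i, bit

# Example, against a scratch copy only:
# print(flip_one_bit("scratch_copy.bin"))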
  • Harsh configuration; e.g. reduced memory, disk space, etc.
  • Basic strategy for dealing with new code: Start with mainstream-level tests. Test the program with easy-to-pass values whose failures would be taken as serious issues. Test broadly rather than deeply: check out all parts of the program quickly before focusing. Look for more powerful tests: once the program can survive the easy tests, put on your thinking cap and look systematically for challenges. Pick boundary conditions: there will be too many good tests, so you need a strategy for picking and choosing. Do some exploratory testing: run new tests every week, from the first week to the last week of the project.
  • List of cases Looking for more powerful variations Execute test
  • CRBA (interfaces to RATS, SPORT and OCD). The project / testers had information about CRBA and RATS – but little information about SPORT and OCD. More defects were therefore uncovered in system test for RATS than for the other two; more defects were found in acceptance test for SPORT and OCD.
  • Similar to the Bubble (Reverse State) Diagram? Exploratory testing of an ATM system: "Not sufficient funds" on a withdrawal where there were funds (but less than 2 times the actual funds). An old system dumped by a commercial bank and bought "cheap" by a state-owned bank. Very little documentation, except a brief description of functionality per record and the record layout. Had access to a programmer who had worked on the project years ago (for NCR, now AT&T). No dialogue / communication documentation. We did not know that every posting transaction was preceded by an inquiry transaction. The inquiry transaction would "reserve" money (in a way not interpreted correctly by the core system), so sometimes we would get "not sufficient funds" in the succeeding posting transaction.
  • Ambiguous – unclear (from Cambridge International Dictionary of English). ambiguous, adjective: having or expressing more than one possible meaning, sometimes intentionally. It was hoped that he would clarify the ambiguous remarks he made earlier. The government has been ambiguous on this issue. Her speech was deliberately ambiguous to avoid offending either side. His reply to my question was somewhat ambiguous. The wording of the agreement is ambiguous, so both interpretations are valid. They've always had ambiguous (= uncertain) feelings about whether or not they should have children. ambiguously, adverb: Some questions were badly or ambiguously worded. ambiguity, noun: We wish to remove any ambiguity (= confusion) concerning our demands. [U] There are some ambiguities in the legislation. [C] Lee Copeland: Testing UML Models (StickyMinds.com). Correctness, Completeness, Consistency – focusing on: Syntax, Domain testing, Traceability.
  • In particular: try all examples and guidelines.
  • Here are some that I've been thinking about: General systems modeling and dimensional analysis. Dimensional analysis has one meaning in Physics that's not quite what I'm talking about. However, I discovered that Grounded Theory, which is a methodology of qualitative (social) research, uses "dimensional analysis" in exactly the way I do. Dimensional analysis is – if you can believe it – the process of analyzing the dimensions of something. For instance, if I were to do dimensional analysis on a wine glass, I would think of the following things:
    o- volume of the glass
    o- height of the glass
    o- width at its widest point
    o- deviation from perfect circularity
    o- mass
    o- pieces (enumeration of distinct components: base, stem, cup)
    o- index of refraction
    o- reflectivity
    o- opacity
    o- melting point
    o- material it's made from
    o- name of the design (is it a flute? or a snifter? or something else?)
    o- brittleness
    o- age
    o- stresses at each point
    o- place of manufacture
    o- current owner
    o- surface it is currently standing upon
    o- dimensions of what it contains (temperature, acidity, amount, chemicals, state of matter)
    o- what it has contained in the past
    o- sentimental value
    o- market value
    Okay, I'll stop. The skill of dimensional analysis is about modelling something in many different ways, any of which may figure in how I will test it. I think of these as dimensions instead of attributes because I want to emphasize that the thing I'm modelling can vary with respect to any of them. I try to imagine scales for each of the dimensions. One way to think of the process is looking at how something you intend to test might be different. For instance, how is testing a 200-year-old wine glass different from testing a brand new wine glass? How might different liquids affect a wine glass?
  • · Inferencing out loud. Take one of the skills on my list: "inferencing out loud". The idea I have in mind is that the ability to talk through a logical progression, either forward from evidence to conjecture or backward from conjecture to evidence, is not a skill very many people have developed. However, it's vital to have that skill if you want to be effective at the art of persuading other people why the practice you are advocating fits the context at hand. I help people practice this skill in my classes by getting them to find bugs in a product, then explain logically why they think what they see represents a bug (which I define as a threat to the value of the product). Most don't do it very well, so I offer heuristics that can help, talk about various forms of inference they could use, demonstrate the process, and give them an opportunity to practice in class. Is this a skill that really belongs on the list? Maybe. But maybe again there's a better way of breaking it down. Maybe we should separate the "out loud" part from the inferencing part. Maybe there's another name for it more appropriate than "inferencing out loud". From: James Bach. To: Ståle Amland. Date: Thu, 22 Aug 2002 11:24:53 -0400 (EDT). Subject: Re: "Inferencing out loud". > In the explanation of inferencing out loud you say that most people do not > do it well (I don't do it well!), and that you offer some heuristics for > people to use in your class. Could you expand a little bit about that > please? What kind of heuristics do you have in mind? If you look in my class notes, you'll see a slide about HICCUPP, which is seven heuristics for thinking a product might have a problem. That's an example of a heuristic to support inferencing out loud. Let me illustrate: During a testing exercise, I'll ask people to report bugs. I wait for someone to report a bug that is not based on or apparently related to anything in the specification. Then I challenge them on how that can possibly be a bug. What usually happens is that the student is not able to express their reasoning for that. I let the student flounder for a little bit, then I jump over to their side of the argument and try to help them illustrate the ladder of inference that led them to the conclusion that there's a bug. Sometimes they just need help putting their thoughts into words. Other times they really haven't thought the matter through. I help them either way. Each of the HICCUPP heuristics is about an inconsistency. What the tester notices is that something is not consistent with something else. His job is to present the inconsistency to the developer, along with information about why the inconsistency represents a problem. Perhaps the tester feels that a behavior of the product is inconsistent with the image of the company that he believes we should project. If so, he can talk about why that might be. His case will be more powerful, however, if a bug relates to more than one of the heuristics. When testers get used to the heuristics, they have an easier time explaining themselves and responding to questions about the logic of their beliefs. That's inferencing out loud. -- James. Infer, verb, FORMAL: to obtain information indirectly. What do you infer from her refusal? Although she agreed with me, I inferred from her expression that she was reluctant. We inferred from comments they had made to friends that they were unlikely to support us. We can/may infer from the absence of women in university history that higher education was denied them.
If you see a man and a woman in a bar holding hands, it's reasonable to infer that they're having some sort of relationship. Inference, noun, FORMAL: They were warned to expect a heavy air attack and, by inference, many casualties. [U] His change of mind was recent and sudden, the inference being that someone had persuaded him. [C] From her reply we drew the inference that she had already seen the document. [C] (from Cambridge International Dictionary of English) To draw a conclusion, derive / deduce, assume based on...
  • · Experiment design. By "experiment design" I mean any skill (including the ability to perform the technique known as "Design of Experiments") related to setting up an experiment. This is kind of vague, as yet, because there are a lot of different skills involved, and it's been a while since I read up on it. Also, setting up a social experiment is different from setting up a chemistry experiment. But, basically, I'm talking about the ability to identify and control variables, use control groups, and stuff like that. It's a skill I wish I had more of. · Technical story telling. · Use of mnemonics and heuristics. mnemonic, noun, adjective [C]: (something such as a very short poem or a special word) used to help a person remember something. 'Roy G Biv' is a mnemonic for the colours of the spectrum and the order in which they appear: red, orange, yellow, green, blue, indigo, violet. · De-biasing (individual or team). · Exploratory investigation. · Risk analysis. · Bug advocacy.
  • SUMMARY: Be prepared! Be ready! Understand the science and master the craft. At the heart of all ET styles: questions and questioning skills. STYLES: Hunches – ET is NOT "random". Models – we usually think of modeling in terms of preparation for formal testing, but there is no conflict between modeling and exploration; both types of tests start from models. The difference is that in exploratory testing, our emphasis is on execution (try it now) and learning from the results of execution rather than on documentation and preparation for later execution. Failure models – how to break software? 18 different attacks grouped into input, output, storage and computation constraints. Examples – use cases, scenarios, soap operas. Invariances. Interference – interrupt, change, stop, pause, swap, compete. Error handling – review error messages. Troubleshooting – variations when retesting fixes. Group insight – brainstorming, pair testing. Specifications – satisfice, test model, user manual, consistency heuristic: HICCUPP.

Presentation Transcript

  • Exploratory Test Styles
    • These slides are distributed under the Creative Commons License.
    • In brief summary, you may make and distribute copies of these slides so long as you give the original author credit and, if you alter, transform or build upon this work, you distribute the resulting work only under a license identical to this one.
    • For the rest of the details of the license, see http://creativecommons.org/licenses/by-sa/2.0/legalcode.
  • Styles of Exploration Outline Introduction Test Management and Techniques ET Planning, Exec. and Documentation ET Styles ET Management
    • Hunches
    • Models
    • Examples
    • Invariances
    • Interference
    • Error handling
    • Troubleshooting
    • Group Insight
    • Specifications
  • 4. Exploratory Test Styles – Skills and Techniques
  • “In the fields of observation, chance favors only those minds which are prepared.” Louis Pasteur. Våga et al. 2002
  • Approaches / Styles of ET
    • At the heart of all ET styles:
      • Questions and Questioning Skills
    • Characterize the styles with respect to each other:
      • Do they focus on:
        • Method of questioning ?
        • Method of describing or analysing the product ?
        • The details of the product ?
        • The patterns of use of the product ?
        • The environment in which the product is run?
      • To what extent would this style benefit from group interaction?
    From Black Box Software Testing, copyright © 1996 – 2002 Cem Kaner
  • Styles of Exploration
    • Hunches
    • Models
    • Examples
    • Invariances
    • Interference
    • Error Handling
    • Troubleshooting
    • Group Insight
    • Specifications
    From Black Box Software Testing, copyright © 1996 – 2002 Cem Kaner
  • Hunches
    • ”Random”
    • Questioning
    • Similarity to previous errors
    • Following up gossip and predictions
    • Follow up recent changes
    From Black Box Software Testing, copyright © 1996 – 2002 Cem Kaner
  • Models
    • Architecture diagrams
    • Bubble diagrams
    • Data relationships
    • Procedural relationship
    • Model-based testing (state matrix)
    • Requirements definition
    • Functional relationship (for regression testing)
    • Failure models
    From Black Box Software Testing, copyright © 1996 – 2002 Cem Kaner
  • Architecture Diagrams
    • Work from a high level design (map) of the system
      • Pay primary attention to interfaces between components or groups of components. We’re looking for cracks that things might have slipped through
      • What can we do to screw things up as we trace the flow of data or the progress of a task through the system?
    • You can build the map in an architectural walkthrough
      • Invite several programmers and testers to a meeting. Present the programmers with use cases and have them draw a diagram showing the main components and the communication among them. For a while, the diagram will change significantly with each example. After a few hours, it will stabilize.
      • Take a picture of the diagram, blow it up, laminate it, and you can use dry erase markers to sketch your current focus.
      • Planning of testing from this diagram is often done jointly by several testers who understand different parts of the system.
    Styles of Exploration: Models – From Black Box Software Testing, copyright © 1996 – 2002 Cem Kaner
  • Bubble (Reverse State) Diagram
    • To troubleshoot a bug, a programmer will often work the code backwards, starting with the failure state and searching for the state that could have led to it (and the states that could have led to those).
    • The tester imagines a failure instead, and asks how to produce it.
      • Imagine the program being in a failure state. Draw a bubble.
      • What would have to have happened to get the program here? Draw a bubble for each immediate precursor and connect the bubbles to the target state.
      • For each precursor bubble, what would have happened to get the program there? Draw more bubbles.
      • More bubbles, etc.
      • Now trace through the paths and see what you can do to force the program down one of them.
    Styles of Exploration: Models – From Black Box Software Testing, copyright © 1996 – 2002 Cem Kaner
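When even a rough transition model exists, the backward reasoning can be mechanized. A sketch over an invented transition table (none of these states come from the slides):

FORWARD = {                      # state -> states reachable from it
    "idle": ["editing"],
    "editing": ["saving", "printing"],
    "saving": ["disk_full_error", "idle"],
    "printing": ["paper_jam_error", "idle"],
}

# Invert the graph so we can walk from a failure back toward its causes.
BACKWARD = {}
for src, dsts in FORWARD.items():
    for dst in dsts:
        BACKWARD.setdefault(dst, []).append(src)

def paths_to(failure, depth=3):
    """All precursor chains (most recent state first) up to `depth` hops back."""
    if depth == 0 or failure not in BACKWARD:
        return [[failure]]
    return [[failure] + rest
            for prev in BACKWARD[failure]
            for rest in paths_to(prev, depth - 1)]

for chain in paths_to("disk_full_error"):
    print(" <- ".join(chain))    # disk_full_error <- saving <- editing <- idle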
  • Data Relationship
    • Pick a data item
    • Trace its flow through the system
    • What other data items does it interact with?
    • What functions use it?
    • Look for inconvenient values for other data items or for the functions, look for ways to interfere with the function using this data item
    Styles of Exploration: Models – From Black Box Software Testing, copyright © 1996 – 2002 Cem Kaner
  • Procedural Relationships
    • Pick a task
    • Step by step, describe how it is done and how it is handled in the system (in as much detail as you know)
    • Now look for ways to interfere with it, look for data values that will push it toward other paths, look for other tasks that will compete with this one, etc.
    Styles of Exploration: Models – From Black Box Software Testing, copyright © 1996 – 2002 Cem Kaner
  • Functional Relationships
    • A model (what you can do to establish a strategy) for deciding what to regression test after a change:
    • Map program structure to functions.
      • This is (or would be most efficiently done as) a glass box task. Learn the internal structure of the program well enough to understand where each function (or source of functionality) fits
    • Map functions to behavioral areas (expected behaviors)
      • The program misbehaved and a function or functions were changed. What other behaviors (visible actions or options of the program) are influenced by the functions that were changed?
    • Map impact of behaviors on the data
      • When a given program behavior is changed, how does the change influence visible data, calculations, contents of data files, program options, or anything else that is seen, heard, sent, or stored?
    Styles of Exploration: Models – From Black Box Software Testing, copyright © 1996 – 2002 Cem Kaner
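The three mappings can be kept as simple lookup tables, so that a changed function expands mechanically into the behaviors and data to retest. A sketch with invented names:

FUNC_TO_BEHAVIORS = {
    "recalc_totals": ["invoice_display", "monthly_report"],
    "parse_date": ["invoice_display", "search"],
}
BEHAVIOR_TO_DATA = {
    "invoice_display": ["invoice_table", "totals_cache"],
    "monthly_report": ["report_file"],
    "search": ["search_index"],
}

def regression_scope(changed_functions):
    """Expand changed functions into affected behaviors and data."""
    behaviors = {b for f in changed_functions for b in FUNC_TO_BEHAVIORS.get(f, [])}
    data = {d for b in behaviors for d in BEHAVIOR_TO_DATA.get(b, [])}
    return behaviors, data

print(regression_scope({"recalc_totals"}))
# e.g. ({'invoice_display', 'monthly_report'},
#       {'invoice_table', 'totals_cache', 'report_file'}) -- set order varies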
  • State Model-Based Testing
    • Look at
      • All the possible inputs the software can receive, then
      • All the operational modes (something in the software that makes it work differently if you apply the same input)
      • All the actions that the software can take
      • Do the cross product of those to create state diagrams so that you can see and look at the whole model
    • Example:
      • Spent 5 hours looking at the API list, found 3 – 4 bugs, then spent 2 days making a model and found 272 bugs. The point is that you can make a model that is too big to carry in your head. Modeling shows inconsistencies and illogicalities.
    Styles of Exploration: Models – From Black Box Software Testing, copyright © 1996 – 2002 Cem Kaner
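The cross-product step is one line with itertools; the inputs and operational modes below are invented placeholders:

from itertools import product

INPUTS = ["open", "save", "print"]
MODES = ["no_document", "document_loaded", "document_modified"]

for mode, action in product(MODES, INPUTS):
    print(f"in mode {mode!r}, apply input {action!r} and record the resulting state")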
  • State Transition Diagram – File Transfer
    Styles of Exploration: Models
    [Slide shows the state transition diagram for the file-transfer example: from the initial state (request preparation) through Off hook, Dial, Modems negotiate, Modems connected, Applications negotiate, Application connected; Application 1 then requests a file download and transfers #1…#N run to transfer COMPLETE. Failure states include Modem NOT responding, NO modem connection, Server NOT responding, "Tech" failure, NO application connection, Application failure, "Com." failure, file SHARING violation, invalid file request, and transfer ABORTED.]
  • Failure Model
    • Whittaker: Why Software Fails (1999/2002) “The Fundamental cause of software errors”
    • Constraint violations
      • Input constraints
        • Such as buffer overflow
      • Output constraints
      • Computations
        • Look for divide-by-zeros and rounding errors. Figure out inputs to give the system that will make it fail to recognize wrong outputs
      • Data violations
      • Really good for finding security holes
    Styles of Exploration: Models – From Black Box Software Testing, copyright © 1996 – 2002 Cem Kaner
  • “ How to Break Software” (1)
    • Being a tester means finding bugs efficiently:
      • Set clear goals for every test case
      • Understand where bugs might hide
      • Know how to expose them
    • The method:
      • Collect and study a large number of bugs in released products
      • Understand why they occur and what type of test it would take to find them
      • Generalize the test into “attack patterns” and teach students how to execute these patterns
      • Collect even more bugs, classify them and refine the attacks
    Styles of Exploration: Failure Models – How to Break Software, Whittaker et al. 2000
  • “ How to Break Software” (2)
    • Input Constraint Attacks:
      • Force all error messages to occur
      • Apply inputs that force default values
      • Explore character sets and data types
      • Overflow input buffer
      • Find inputs that may interact
        • Test various combinations of their values
      • Repeat the same inputs many times
    Styles of Exploration: Failure Models
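    Several of these attacks can be scripted. A minimal Python sketch against a hypothetical parse_quantity function; the function and its 0–99 contract are assumptions for illustration, not part of the deck:

        # Scripted input-constraint attacks against an assumed unit under test.
        def parse_quantity(text: str) -> int:
            value = int(text)            # stand-in implementation under test
            if not 0 <= value <= 99:
                raise ValueError("out of range")
            return value

        attacks = [
            "",                # force the empty-input error message
            "abc",             # wrong data type
            "-1", "100",       # just outside the assumed range
            "0", "99",         # boundary values
            "9" * 10_000,      # overflow the input buffer
            "١٢",              # explore character sets (Arabic-Indic digits)
        ]

        for text in attacks:
            try:
                print(repr(text[:20]), "->", parse_quantity(text))
            except Exception as exc:   # anything nastier than a clean error is a bug
                print(repr(text[:20]), "->", type(exc).__name__, exc)

        # Repeat the same input many times to expose state leakage
        for _ in range(1000):
            parse_quantity("42")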
  • “How to Break Software” (3)
    • Output Constraint Attacks (see the sketch below)
      • Force different outputs for each input
      • Force invalid outputs
      • Force output size change
      • Force output to exceed output space
      • Force the screen to refresh
    Styles of Exploration: Failure Models
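    A few of these attacks can be checked in code. A minimal Python sketch; the format_name function and the 80-character display limit are illustrative assumptions:

        # Output-constraint checks against an assumed unit under test.
        def format_name(first: str, last: str) -> str:
            return f"{last}, {first}"       # stand-in implementation under test

        cases = [
            ("Jo", "Li"),                   # minimal output size
            ("X" * 200, "Y" * 200),         # force output to exceed its space
            ("", ""),                       # degenerate, possibly invalid output
        ]

        for first, last in cases:
            out = format_name(first, last)
            if len(out) > 80:               # assumed display limit
                print(f"output exceeds display space ({len(out)} chars)")
            if out.strip(", ") == "":
                print("empty-looking output - is this a valid result?")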
  • “How to Break Software” (4)
    • Storage Constraint Attacks (see the sketch below)
      • Apply inputs under differing initial conditions
      • Data Structure Over/Underflow
        • Force a data structure to store too many or too few values
      • Find alternate ways to violate internal data constraints
    Styles of Exploration: Failure Models
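    A minimal Python sketch of the over/underflow attack, using an illustrative bounded stack rather than any real product structure:

        # Force a data structure to store too many and too few values.
        class BoundedStack:
            def __init__(self, capacity: int):
                self.capacity = capacity
                self.items = []

            def push(self, item):
                if len(self.items) >= self.capacity:
                    raise OverflowError("stack full")
                self.items.append(item)

            def pop(self):
                if not self.items:
                    raise IndexError("stack empty")
                return self.items.pop()

        stack = BoundedStack(capacity=3)
        try:
            for i in range(5):          # overflow: push past capacity
                stack.push(i)
        except OverflowError as exc:
            print("overflow handled:", exc)

        stack = BoundedStack(capacity=3)
        try:
            stack.pop()                 # underflow: pop an empty structure
        except IndexError as exc:
            print("underflow handled:", exc)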
  • “How to Break Software” (5)
    • Computation Attacks (see the sketch below)
      • Experiment with invalid operand and operator combinations
      • Force a function to call itself recursively
      • Force computation results to be too large or too small
      • Find features that share data or interact poorly
    Styles of Exploration: Failure Models
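    Some computation attacks can be driven from a few lines of Python; everything here is illustrative:

        import math
        import sys

        # Invalid operand and operator combinations
        for a, b in [(1, 0), (0, 0), ("1", 2)]:
            try:
                print(a, "/", b, "=", a / b)
            except Exception as exc:
                print(a, "/", b, "->", type(exc).__name__)

        # Force a function to call itself recursively past the limit
        def countdown(n):
            return n if n == 0 else countdown(n - 1)

        try:
            countdown(sys.getrecursionlimit() + 100)
        except RecursionError:
            print("recursion limit hit - does the real product fail this cleanly?")

        # Force results too large or too small for the representation
        try:
            math.exp(1000)          # too large for a float
        except OverflowError:
            print("float overflow - check how the product reports it")
        print(1e-320)               # subnormal: quietly loses precision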
  • Exercise 3
    • Select a different area of StarOffice (or continue where you left off, if you prefer and are productive), or
    • Select defect handling in MiniTest
    • Create a charter (including a mission), select a (different) testing style, and continue to test the AUT
  • Examples
    • Use Cases
    • Simple Walkthroughs
    • Positive Thinking
    • Scenarios
    • Soap Operas
    Styles of Exploration: Examples
  • Use Cases
    • List the users of the system
    • For each user, think through the tasks they want to do
    • Create test cases to reflect their simple and complex uses of the system (see the sketch below)
    Styles of Exploration: Example
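    A minimal Python sketch of turning the users-and-tasks listing into test charters; the roles and tasks are hypothetical placeholders:

        # Enumerate users x tasks; each pairing seeds at least two charters.
        users = {
            "clerk":   ["enter an order", "correct an order"],
            "manager": ["approve an order", "run the monthly report"],
            "admin":   ["add a user", "restore a backup"],
        }

        for user, tasks in users.items():
            for task in tasks:
                print(f"Charter: as {user}, {task} - simple path")
                print(f"Charter: as {user}, {task} - complex/unusual variation")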
  • For each Use Case verify…
    • Sufficient tests, positive and negative, have been identified for each flow of events (for the use cases that traverse your target-of-test)
    • Tests to address any business rules implemented by the use cases, ensuring that there are tests inside, outside, and at the boundary condition / value for each business rule
    • Tests to address any sequencing of events or actions, such as those identified in the sequence diagrams in the design model, or user interface object states or conditions
    • Tests to address any special requirements defined for the use case, such as minimum/maximum performance, sometimes combined with minimum/maximum loads or data volumes during the execution of the use cases
    Styles of Exploration: Example (modified from Rational Unified Process (RUP), © Rational)
  • Simple Walkthroughs
    • Test the program broadly, but not deeply.
      • Walk through the program, step by step, feature by feature.
      • Look at what’s there.
      • Feed the program simple, non-threatening inputs.
      • Watch the flow of control, the displays, etc.
    Styles of Exploration: Example
  • Positive Testing
    • Try to get the program working in the way that the programmers intended.
    • One of the points of this testing is that you educate yourself about the program. You are looking at it and learning about it from a sympathetic viewpoint, using it in a way that will show you what the value of the program is.
    • This is true “positive” testing – you are trying to make the program show itself off, not just trying to confirm that all the features and functions are there and kind of sort of working.
    Styles of Exploration: Example
  • Scenarios
    • The ideal scenario has several characteristics:
      • It is realistic (e.g. it comes from actual customer or competitor situations)
      • There is no ambiguity about whether a test passed or failed
      • The test is complex, that is, it uses several features and functions
      • There is a stakeholder who will make a fuss if the program doesn’t pass this scenario
    Styles of Exploration: Example
  • Soap Operas
    • Build a scenario based on real-life experience. This means client / customer experience.
    • Exaggerate each aspect of it:
      • Example, for each variable, substitute a more extreme value
      • Example, if a scenario can include a repeating element, repeat it lots of times
      • Make the environment less hospitable to the case (increase or decrease memory, printer resolution, video resolution, etc.)
    • Create a real-life story that combines all of the elements into a test case narrative.
    Styles of Exploration: Example – Scenarios (Cem Kaner, referencing Hans Buwalda, 2001)
  • Invariances
    • Making changes that should NOT affect the program.
    • Examples:
      • Sending text and graphics in different orders to a printer
      • Using VERY large files with programs that should handle large files
      • Mathematical operations in different but equivalent orders (see the sketch below)
    Styles of Exploration: Invariances
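    The third example can be scripted directly. A minimal Python sketch; the tolerance value is an assumption:

        # Invariance check: summing in different but equivalent orders should
        # not change the result beyond floating-point noise.
        import random

        random.seed(1)
        values = [random.uniform(-1e6, 1e6) for _ in range(100)]

        forward   = sum(values)
        backward  = sum(reversed(values))
        reordered = sum(sorted(values, key=abs))    # numerically different order

        # Floating point needs a tolerance; a large discrepancy would point at
        # an order-dependence bug in a real computation pipeline.
        tolerance = 1e-6 * max(1.0, abs(forward))   # assumed tolerance
        assert abs(forward - backward) < tolerance
        assert abs(forward - reordered) < tolerance
        print("summation order did not affect the result beyond tolerance")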
  • Interference
    • Interrupt
    • Change
    • Stop
    • Pause
    • Swap
    • Compete
    Styles of Exploration: Interference
  • Interrupt
    • Generate interrupts
      • From a device related to the task (e.g. pull out a paper tray, perhaps one that isn’t in use while the printer is printing)
      • From a device unrelated to the task (e.g. move the mouse and click while the printer is printing)
      • From a software event
    Styles of Exploration: Interference
  • Change
    • Change something that this task depends on
      • Swap out a floppy
      • Change the printer that the program will print to (without signaling a new driver)
      • Change the video resolution
    Styles of Exploration: Interference
  • Stop
    • Cancel the task (at different points during its completion)
    • Cancel some other task while this task is running
      • A task that is in communication with this task (the core task being studied)
      • A task that will eventually have to complete as a prerequisite to completion of this task
      • A task that is totally unrelated to this task
    Styles of Exploration: Interference
  • Pause
    • Find some way to create a temporary interruption in the task
    • Pause the task
      • For a short time
      • For a long time (long enough for a timeout, if one will arise)
    • Put the printer on local
    • Put a database under use by a competing program, lock a record so that it can’t be accessed – yet!
    Styles of Exploration: Interference
  • Swap (out of memory)
    • Swap the process out of memory while it is running (e.g. change focus to another application and keep loading or adding applications until the application under test is paged to disk)
      • Leave it swapped out for 10 minutes or whatever the timeout period is. Does it come back? What is its state? What is the state of processes that are supposed to interact with it?
      • Leave it swapped out much longer than the timeout period. Can you get it to the point where it is supposed to time out, then send a message that is supposed to be received by the swapped-out process, then time out on the time allocated for the message? What are the resulting states of this process and the one(s) that tried to communicate with it?
    • Swap a related process out of memory while the process under test is running.
    Styles of Exploration: Interference
  • Compete
    • Examples:
    • Compete for a device, such as a printer
      • Put device in use, then try to use it from software under test
      • Start using device, then use it from other software
      • If there is a priority system for device access, use software that has higher, same and lower priority access to the device before and during attempted use by software under test.
    • Compete for processor attention
      • Some other process generates an interrupt (e.g. ring into the modem, or a time-alarm in your contact manager)
      • Try to do something during heavy disk access by another process
    • Send this process another job while one is underway
    Styles of Exploration: Interference
  • Error Handling
    • Review possible error messages
      • Press the wrong key at the error dialog
      • Make the error several times in a row
    • Device related errors
      • (disk full, printer not ready etc.)
    • Data-input errors
      • Corrupt files, missing data, wrong data etc.
    • Stress / Volume
      • (Huge files, too many files, tasks, devices, fields, records etc.; see the test-data sketch below)
    Styles of Exploration: Error Handling
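    Much of this test data can be generated. A minimal Python sketch; the file format, paths, and sizes are illustrative assumptions:

        # Generate error-handling test data: corrupt, empty, and huge files.
        import os
        import tempfile

        workdir = tempfile.mkdtemp(prefix="et_error_data_")

        # Corrupt file: valid header followed by a garbage body
        with open(os.path.join(workdir, "corrupt.csv"), "wb") as f:
            f.write(b"id,name,amount\n")
            f.write(os.urandom(256))

        # Missing data: a header with no records
        with open(os.path.join(workdir, "empty.csv"), "w") as f:
            f.write("id,name,amount\n")

        # Stress / volume: a deliberately huge file
        with open(os.path.join(workdir, "huge.csv"), "w") as f:
            f.write("id,name,amount\n")
            for i in range(1_000_000):
                f.write(f"{i},name{i},{i * 0.01}\n")

        print("feed the application each file in", workdir)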
  • Troubleshooting
    • We often do exploratory tests when we troubleshoot bugs:
    • Bug Analysis
      • Simplify the bug by deleting or simplifying steps
      • Simplify the bug by simplifying the configuration (or background tools)
      • Clarify the bug by running variations to see what the problem is
      • Clarify the bug by identifying the version in which it entered the product
      • Strengthen the bug with follow-up tests (using repetition, related tests, related data, etc.) to see if the bug left a side effect
      • Strengthen the bug with tests under a harsher configuration
    • Bug regression: vary the steps in the bug report when checking if the bug was fixed
    Styles of Exploration: Troubleshooting
  • Group Insight
    • Brainstormed test lists
    • Group discussion of related components
    • Fishbone analysis
    • Paired Exploratory Testing
    Styles of Exploration: Group Insight
  • Brainstormed Test Lists
    • Example (a starter test list for this spec is sketched in code below):
    • Here is the program’s specification:
      • This program is designed to add two numbers, which you will enter
      • Each number should be one or two digits
      • The program will print the sum. Press Enter after each number
      • To start the program, type ADDER
    • Before you start testing, do you have any questions about the spec?
    Styles of Exploration: Group Insight
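    A starter list for the ADDER spec, as a minimal Python sketch; the expected behavior for the invalid cases is exactly what the brainstorm should question:

        # Brainstormed tests for the two-digit ADDER: each entry pairs an
        # input with the reason it might find a bug.
        tests = [
            (("0", "0"),    "smallest valid inputs"),
            (("99", "99"),  "largest valid inputs; the sum needs three digits"),
            (("-9", "1"),   "is a minus sign a digit? the spec is ambiguous"),
            (("1.5", "2"),  "non-integer input"),
            (("", "5"),     "Enter pressed with no number"),
            (("100", "1"),  "three digits - just past the stated limit"),
            (("abc", "1"),  "non-numeric input"),
        ]

        for (a, b), why in tests:
            print(f"enter {a!r}, then {b!r}  # {why}")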
  • Brainstormed Test Lists
    • Summary of Example:
      • You brainstormed a list of tests for the two-variable, two-digit problem:
        • The group listed a series of cases (test case, why)
        • You then examined each case and the class of tests it belonged to, looking for a more powerful variation of the same test.
        • You then ran these tests.
      • You can apply this approach productively to any part of the system.
    Styles of Exploration: Group Insight
  • Group Discussion of Related Components
    • The objective is to test the interaction of two or more parts of the system
    • The people in the group are very familiar with one or more parts. Often, no one person is familiar with all of the parts of interest, but collectively the ideal group knows all of them.
    • The group looks for data values, timing issues, sequence issues, competing tasks, etc. that might screw up the orderly interaction of the components under study.
    Styles of Exploration: Group Insight
  • Fishbone Analysis
    • Fishbone analysis is a traditional failure analysis technique. Given that the system has shown a specific failure, you work backwards through precursor states (the various paths that could conceivably lead to this observed failure state).
    • As you walk through, you say that Event A couldn’t have happened unless Event B or Event C happened. And B couldn’t have happened unless B1 or B2 happened. And B1 couldn’t have happened unless X happened, etc.
    • While you draw the chart, you look for ways to prove that X (whatever, a precursor state) could actually have been reached. If you succeed, you have found one path to the observed failure.
    Styles of Exploration: Group Insight
  • Paired Exploratory Testing
    • See the previous presentation on testing in pairs, section 3.2 – Exploratory Testing in Pairs.
    • Developed independently of paired programming, but many of the same problems and benefits apply.
    • The eXtreme Programming community has a great deal of experience with paired work and offers many lessons:
      • Kent Beck, Extreme Programming Explained
      • Ron Jeffries, Ann Anderson & Chet Hendrickson, Extreme Programming Installed
    • Laurie Williams of NCSU does research in pair programming. For her publications, see http://collaboration.csc.ncsu.edu/laurie/
    Styles of Exploration: Group Insight
  • Specifications
    • Active reading – Tripos
    • Active reading – Ambiguity analysis
    • User Manual
    • Consistency Heuristics
    Styles of Exploration: Specifications
  • Active Reading – Developing Questions
    • Satisfice Testing Model:
      • When you run out of testing ideas, walk the chart looking for a project / product / quality factor that you haven’t based a test on recently
      • Randomly combine project / product / quality factors: make up a test case influenced by the selected product factor that tests the selected product element against the selected quality criterion.
      • Analyze a specification, operating on the assumption that every statement defines a project factor, a product factor or a quality criterion.
    Styles of Exploration: Specifications
  • Active Reading (Ambiguity Analysis)
    • There are all sorts of sources of ambiguity in software design and development
      • In the wording or interpretation of specifications or standards
      • In the expected response of the program to invalid or unusual input
      • In the behavior of undocumented features
      • In the conduct and standards of regulators / auditors
      • In the customers’ interpretation of their needs and the needs of the users they represent
      • In the definitions of compatibility among 3rd-party products
    • Whenever there is ambiguity, there is a strong opportunity for a defect (at least in the eyes of anyone who understands the world differently from the implementation).
    Styles of Exploration: Specifications
  • User Manual
    • Write part of the user manual and check the program against it as you go.
    • Any writer will discover bugs this way.
    • An exploratory tester will discover quite a few this way.
    Styles of Exploration: Specifications
  • Consistency Heuristics
    • Discussed previously: HICCUPP
      • Consistent with History: Present function behavior is consistent with past behavior.
      • Consistent with our Image: Function behavior is consistent with an image that the organization wants to project.
      • Consistent with Comparable Products: Function behavior is consistent with that of similar functions in comparable products.
      • Consistent with Claims: Function behavior is consistent with what people say it’s supposed to be.
      • Consistent with User’s Expectations: Function behavior is consistent with what we think users want.
      • Consistent within Product: Function behavior is consistent with behavior of comparable functions or functional patterns within the product.
      • Consistent with Purpose: Function behavior is consistent with apparent purpose.
    Styles of Exploration: Specifications
  • “I thought skill is the ability to do something, more or less. Skill varies from person to person. It’s distinct from talent and knowledge. A technique, by contrast, is a way of doing something; a sort of recipe. Skill belongs to a person; technique is universal.” James Bach, Satisfice Inc.
  • Styles of Exploration Summary: Skills
    • General systems modelling and dimensional analysis
    • Inferencing out loud
    • Experiment design
    • Technical story telling
    • Use of mnemonics and heuristics
    • De-biasing (individual or team)
    • Exploratory investigation
    • Risk analysis
    • Bug advocacy
    • +++?
  • ... and dimensional analysis
    • ...is the process of analyzing the dimensions of something.
    • Example – dimensional analysis done on a wine glass:
      • volume of the glass
      • height of the glass
      • width at its widest point
      • deviation from perfect circularity
      • mass
      • pieces (enumeration of distinct components: base, stem, cup)
      • melting point
      • material it's made from
      • age
      • surface it is currently standing upon
      • sentimental value
      • market value
    • The skill of dimensional analysis is about modelling something in many different ways, any of which may figure in how we will test it
  • Inferencing Out Loud
    • The ability to talk through a logical progression, either forward from evidence to conjecture or backward from conjecture to evidence
    • Using the HICCUPP heuristics listed earlier under Consistency Heuristics
  • Heuristics (and Rules) and Skills
    • “…we relate to heuristics as a tool to apply; something that might help us do the right thing in a given situation, whereas we relate to a rule as something to comply with; something that defines right behavior. Using heuristics properly requires that you exercise discretion and judgment, on some level; whereas judgment may get in the way of rules. It's helpful to have contradictory heuristics, because that's like having a variety of advice available before making a decision; whereas contradictory rules make compliance impossible.”
  • Exercise 4
    • Select a different area of StarOffice or MiniTest (or continue where you left off if you prefer and are productive), or
    • Select the web site: www.amland.no/et_test
    • Create a charter (including a mission), select a (different) testing style, and continue to test the AUT
  • Styles of Exploration: Summary
    • Hunches
    • Models
    • Examples
    • Invariances
    • Interference
    • Error Handling
    • Troubleshooting
    • Group Insight
    • Specifications