Evolving an Elective Software Testing Course: Lessons Learned

Slide notes
  • Just as there is a lifecycle for software development, there is one for testing. 1. ANALYSIS -- Given the specification, determine a test strategy (and/or plan). 2. DESIGN -- Given the specification and strategy, apply techniques to derive test cases. 3. IMPLEMENTATION -- Build the machinery necessary to execute the tests, including data and manual scripts or automated scripts (drivers). 4. EXECUTION -- Perform the tests according to the script/driver, and capture results. 5. EVALUATION -- Evaluate test results to classify errors found, and record deficiencies.
  • Recall that sound testing practice is based on the SPECIFICATION and a systematic, repeatable process for test case design. This session deals with achieving ACCOUNTABILITY -- producing evidence that such a systematic process has been followed -- in a COST-EFFECTIVE way.
  • The value of the decision table may not become clear until you are faced with a very complex unit with many conditions and combinations of conditions. The examples we have seen so far have simple stimuli and responses, corresponding to arguments passed and values returned by the function. In general, a surprising amount of effort may have to go into identifying all stimuli and responses.
  • Identifying behavior means identifying all stimuli and all responses, AND relating responses to stimuli. 1. Start with responses, then identify situations that cause each combination of responses, then identify individual stimuli. 2. OR start by identifying stimuli, then identify combinations of stimuli that cause different behavior, then identify each response. EXPECT TO COMBINE THESE APPROACHES
  • ENLARGE your view of the TYPES of STIMULI: 1. For units, the most common types are arguments passed when the unit is called. 2. Interactive inputs (keyboard or mouse). 3. Internal data such as global variables may be used to control a unit or to alter a value calculated by the unit. 4. External data from files or databases, including the FILE STATUS (end-of-file, access errors, etc.). 5. The occurrence of exceptions may alter the behavior of a unit.
  • STIMULI define TEST CASE DATA. When a stimulus is missed, that aspect of the unit's behavior can never be tested. The process of finding out what the stimuli are stimulates discussion and clarification of the interfaces to the unit. Such questioning is likely to reveal an incomplete design.
  • The types of RESPONSES are the same as the types of STIMULI.
  • RESPONSES define EXPECTED RESULTS for test cases. 1. Incomplete set of responses --> Incomplete and erroneous verification. 2. Responses are not always visible -- effects may show up later on, for example: - Failure to initialize variable Sum to zero leads to overflow after adding 30 values (a short illustrative sketch of this appears right after these notes).
  • Again we see the schematic template which lists the types of Stimuli and Responses. When analyzing behavior, question whether there are stimuli (responses) of each type. If so, list them by name (or description). RECALL: Stimuli tend to be used in the CONDITION part of the Decision Table. Responses appear in the ACTION part, along with those stimuli that provide values used in the computation of a response.
  • 1. A Decision Table is a SPECIFICATION. 2. The specification is used for DESIGN AND FOR TESTING. 3. Killing TWO BIRDS with ONE STONE!
  • We now show you a way to derive boundary test cases from the specification. The approach is 1. Analyze specification to identify behaviors. 2. From the decision table CONDITION section, identify boundary conditions. 3. For each boundary condition, derive boundary points. 4. Generate boundary test cases from the boundary points. 5. Add the boundary test cases to the functional test cases already in the test script.
  • The one change to the specification is the introduction of a salaried employee, who is paid neither overtime nor straight time, but for exactly 40 hours.
  • Analysis of the specification identifies three behaviors, TWO for hourly employees, and ONE for salaried.
  • We can use a decision table to organize our analysis. 1. The TOP (CONDITION) part of the table addresses the conditions that define the behaviors. 2. The BOTTOM (ACTION) part of the table identifies the different formulas (actions) taken by the unit, based on COMBINATIONS of conditions. NOTE: 1. More than one action may be taken for a combination of conditions 2. Distinct condition combinations may result in the same action(s) being taken.
  • Given the decision table, the rule is: GENERATE ONE TEST CASE FOR EACH COLUMN. That is, each column defines a FUNCTIONAL EQUIVALENCE CLASS.
  • Record the test cases in the test script, entering the cases in the same order as the COLUMNS of the decision table.
  • The DRIVER must be like a human tester: 1) Driver has access to a set of test cases to be executed. 2) Driver invokes the unit, passing test case data as arguments, and receiving the results. 3) Driver logs the test data and results to a test results file. Additional effort may be required to account for external responses (e.g., data written to files).
  • A test driver is a simple DESIGN PATTERN. 1) Simplest form -- when the unit is a simple function whose only stimuli are parameters, and whose only responses are parameters or return values. 2) When more types of stimuli/responses are involved, the driver has to perform additional set-up before performing tests, and more verification after performing tests. 3) Drivers, once constructed, enable efficient, repeatable testing.
  • Here is the test driver for the Unit Pay. 1) The Unit Environment is the code for Pay. 2) The driver contains variables for the arguments of Pay and the result returned. 3) There are also variables for counting test cases and for the expected result. 4) The test data input file contains the test cases. 5) The following is done for each test case: a) The test case is read from the file. b) The test case data is passed to the unit, and the result is captured. c) The test case data and results are written to the test results file. 6) The results file documents the test session.
  • Here is an example of the Driver input and output files. 1) The input file is just like a TEST SCRIPT. 2) The results file DOCUMENTS TEST RESULTS. 3) Visual inspection determines FAILURES for the OVERTIME test case. 4) An intelligent Driver can contain an ORACLE that ANNOUNCES PASS/FAIL for each test case.
  • The test driver for a class is similar to that for an elementary unit: 1) Reads test cases from a file. 2) Writes test results to a file. The class test driver is DIFFERENT: 1) Calls MULTIPLE functions, passing test data and receiving test results. 2) Must distinguish WHICH FUNCTION is being tested. The class test driver EXTENDS the elementary unit test driver in difference #2.
  • 1) A stack is a collection managed by the rule: First In, Last Out. It works like a stack of plates at a cafeteria. An instance of a stack is called a STACK OBJECT. 2) ATTRIBUTES: data about the collection -- the PRIVATE section above, the STATE of the stack object. 3) BEHAVIOR: the METHODS (functions) in the PUBLIC section above. Stack -- creates an empty stack. Push -- adds another value to the top of the stack. Pop -- removes the value at the top of the stack. Top -- returns the value at the top of the stack. Empty -- returns TRUE when the stack is empty. 4) Methods UPDATE or INQUIRE the state.
  • Before showing the driver, here are the input and output files used by the driver. 1) The input file contains METHOD TEST CASES. 2) Each method is assigned a UNIQUE NUMBER. 3) The test data required depends on which method is being tested. 4) Expected results depend on the overall behavior of the object (stack) under a SEQUENCE of method test cases. The results file also identifies the method being tested. What is wrong with this stack? Which method contains the error?
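
The uninitialized-Sum note above can be made concrete with a small sketch. This is illustrative code assumed for this write-up, not taken from the course materials:

    // Illustrative only (assumed code, not from the deck): the response is wrong
    // from the first call, but because 'sum' starts with whatever garbage was
    // already in memory, the failure may only become visible later -- for example
    // as an overflow or a wildly wrong total after many values have been added.
    int sumValues(const int values[], int n) {
        int sum;                        // BUG: should be 'int sum = 0;'
        for (int i = 0; i < n; ++i)
            sum += values[i];
        return sum;
    }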

    1. Evolving an Elective Software Testing Course: Lessons Learned. Edward L. Jones, Florida A&M University, Tallahassee, FL, USA. 3rd Workshop on Teaching Software Testing.
    2. Agenda
       • Course Overview
       • Student Background
       • Driving Principles
       • Overview of Assignments
       • Course Reflection
       • Improvements
       • Assignment Walkthroughs
    3. Course Overview. DESCRIPTION: The purpose of this course is to build skills necessary to perform software testing at the function, class and application level. Students will be taught concepts of black-box (functional and boundary) and white-box (coverage-based) testing, and will apply these concepts to small programs and components (functions and classes). Students will also be taught evaluative techniques such as coverage and mutation testing (error seeding). This course introduces the software engineering discipline of software quality engineering and the legal and societal issues of software quality.
    4. Programming Focus. AUDIENCE: Not software testers but software developers. What distinguishes the course approach is that it stresses the programming aspect of software testing. A goal is to enhance and expand students’ programming skills to support activities across the testing lifecycle: C++ programming and Unix shell script programming to automate aspects of software testing. Students just needed a course to take ...
    5. Conceptual Objectives. The student shall understand:
       • The software testing lifecycle
       • The relationship between testing, V&V, SQA
       • Theoretical/practical limits of software testing
       • The SPRAE testing framework
       • Concepts and techniques for black-/white-box testing
       • Test case design from behavioral model
       • Design patterns for test automation
       • Test coverage criteria
       • Issues of software testing management
    6. Performance Objectives. The student shall be able to:
       • Use the Unix development environment
       • Write simple Unix shell scripts
       • Design functional and boundary test cases
       • Develop manual test scripts
       • Conduct tests and document results
       • Write test drivers to automate function, object and application testing
       • Evaluate test session results; write problem reports
    7. Learning/Evaluation Activities
       • 80% practice / 20% concepts
       • Lectures (no text)
       • Laboratory assignments
         - Unix commands and tools
         - Testing tasks
       • Examinations
         - 2 online tests
         - Final (online)
       • Amnesty period (1 test / 2 labs)
    8. Student Background
       • Reality:
         - 20 students
         - Not particularly interested in testing
         - Low programming skill/experience
       • Ideal:
         - An interest in software testing
         - Strong programming skills
         - Scientific method (observation, hypothesis forming)
         - Sophomore or junior standing
         - Desire for internship in software testing
    9. My Perspective on Teaching Testing
       • Testing is not just for testers!
       • In an ideal world, fewer testers are required
         - Developers have tester’s skills/mentality
         - Testing overlays the development process
       • No silver bullet ... just bricks
         - Simple things provide leverage
         - No one-size-fits-all
       • Be driven by a few sound principles
    10. Driving Principles
       • Testing for Software Developers
         - Duality of developer and tester
       • Few Basic Concepts
         - Testing lifecycle
         - Philosophy / Attitudes (SPRAE)
       • Learn By Doing
         - Different jobs across the lifecycle
    11. A Testing Lifecycle. (Diagram: the stages Analysis, Design, Implementation, Execution and Evaluation, with the artifacts Specification, Test Strategy/Plan, Test Cases, Test Script/Data/Driver, Test Results, Defect Data and Problem Reports flowing through the lifecycle.)
    12. Experience Objectives
       • Student gains experience at each lifecycle stage
       • Student uses/enhances existing skills
       • Student applies different testing competencies
       • Competencies distinguish novices from the experienced
    13. A Framework for Practicing Software Testing (SPRAE)
       • Specification: the basis for testing
       • Premeditation (forethought, techniques)
       • Repeatability of test design, execution, and evaluation (equivalence v. replication)
       • Accountability via testing artifacts
       • Economy (efficacy) of human, time and computing resources
    14. Key Test Practices
       • Practitioner -- performs defined test
       • Builder -- constructs test “machinery”
       • Designer -- designs test cases
       • Analyst -- sets test goals, strategy
       • Inspector -- verifies process/results
       • Environmentalist -- maintains test tools & environment
       • Specialist -- performs test life cycle
    15. Test Products
       • Test Report (informal) of manual testing
       • Test Scripts for manual testing
       • Test Log (semi-formal)
       • Application Test Driver (Unix shell script)
       • Unit/Class Test Driver (C++ program)
       • Test Data Files
       • Test Results (automated)
       • Bug Fix Log (informal)
    16. Specification Products
       • Narrative specification
       • Specification Diagrams
       • Specification Worksheet (pre/post conditions)
       • Decision Tables
       • Control Flow Graphs
    17. Assignments Target Skills
       • Observation Skills
         - Systematic exploration of software behavior
       • Specification Skills
         - Describe expected or actual behavior
       • Programming Skills
         - Coding for development of test machinery
       • Test Design Skills
         - Derive test cases from specification using technique
       • Team Skills
         - Work with other testers
    18. Course Reflection
       • Testing is programming intensive
       • Testing requires analytical skills and facility with mathematical tools
       • Testing generates a data management problem that is amenable to automation
       • Testing gives students an advantage in entry-level positions
       • Students take this course too late
    19. Failed Course Expectations
       • Students test at all levels
         - No “in-the-large” application (e.g., web-based)
       • Students develop intuitive testing skills
         - On largest project, concepts did not transfer
         - 1 in 3 students show “knack” for testing
       • Impact of balance of concept and experience
         - Poor performance on exams with problems like those in labs
       • Test case design skills low
         - Homework needed v. labs (programming)
       • Mentoring (timely feedback) did not occur
         - Students left to own devices too much
    20. Why These Outcomes?
       • Formalisms important, but difficult
         - Provide the behavior model (e.g., decision table)
         - Basis for systematic test case design, automation
       • Lack of textbook
         - Students need concepts + lots of examples
       • Poor availability when students were working
         - Students worked at last minute
         - Not always around
         - Automated grading lacked 1-1 feedback
       • Standards-rich/tool-poor environment a distraction
       • Assigned work too simple??
    21. Proposed Changes
       • Improve lecture notes and example bank
         - Find and refine
         - Resources and workbook
       • Outside-in: testing in-the-large before in-the-small
       • Recitation/laboratory for discussion and feedback
       • Increase use of testing tools (no-cost)
       • Increase use of collection of code/applications
       • Examination testbank for practice, learning
    22. Assignment Walkthroughs (see paper)
    23. Assignment Walkthroughs
       • Blind Testing
       • Test Documentation
       • Specification
       • Test Automation via Shell Scripts
       • Unit Test Automation (Driver)
       • White-Box Unit Testing
       • Class Testing
    24. Blind Testing I
       • Objective: Explore behavior of software without the benefit of a specification
       • Given: Executables + general description
       • Results: Students not systematic in exploration or in generalizing observed behavior
         - Hello -> output based on length of input
         - Add -> 1-digit modulus 10 adder, input exception
         - Pay -> pay calculation with upper bound pay amount
    25. Blind Testing II
       • Programming Objective: Student writes a program that matches the observed behavior of Blind Testing I
       • Test Objective: Observations on Blind Testing I used as “test cases” for the reverse-engineered program
       • Results: Students did not see the connection
         - Did not replicate the recorded behavior
         - Did not recognize (via testing) failure to replicate
    26. SUPPLEMENTAL SLIDES: Student work
    27. SCALING UP. The heart of the approach is to use a decision table as a thinking tool. The most critical task in this process is to identify all the stimuli and responses. When there are many logical combinations of stimuli, the decision table can become large, indicating that the unit is complex and hard to test.
    28. IDENTIFYING BEHAVIOR: Approaches
       • Work backwards
         - Identify each response
         - Identify conditions that provoke the response
         - Identify separate stimuli
       • Work forward
         - Identify stimuli
         - Identify how each stimulus influences what the unit does
         - Specify the response
    29. IDENTIFYING STIMULI
       • Arguments passed upon invocation
       • Interactive user inputs
       • Internal, secondary data
         - global or class variables
       • External data (sources)
         - file or database status variables
         - file or database data
       • Exceptions
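
To make the list on slide 29 concrete, here is a small illustrative unit (assumed for this write-up, not from the course) whose stimuli span several of those types: an argument, a global variable, external file data, the file status, and an exception path.

        #include <fstream>
        #include <stdexcept>
        #include <string>

        int discountPercent = 0;   // internal (global) data: a stimulus that is easy to overlook

        // Stimuli: the arguments 'price' and 'taxFile', the global 'discountPercent',
        // the contents of the tax file (external data), and the file status
        // (a missing file triggers the exception path).
        double finalPrice(double price, const std::string& taxFile) {
            std::ifstream in(taxFile);
            if (!in)                                   // file status as a stimulus
                throw std::runtime_error("tax file not found");
            double taxRate = 0.0;
            in >> taxRate;                             // external data as a stimulus
            return price * (1.0 + taxRate) * (1.0 - discountPercent / 100.0);
        }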
    30. IT PAYS TO BE A GOOD STIMULUS DETECTIVE
       • Failure to identify stimuli results in an incomplete, possibly misleading test case
       • The search for stimuli exposes
         - interface assumptions -- a major source of integration problems
         - incomplete design of the unit
         - inadequate provision for exception handling
    31. IDENTIFYING RESPONSES
       • Arguments/Results passed back on exit
       • Interactive user outputs
       • Internal, secondary data
         - updated global or class variables
       • External data (sinks)
         - output file or database status variables
         - output file or database data
       • Exceptions
    32. IT PAYS TO BE A GOOD RESPONSE DETECTIVE
       • Failure to identify responses results in
         - incomplete understanding of the software under test
         - shallow test cases
         - incomplete expected results
         - incomplete test "success" verification -- certain effects not checked
       • To test, one must know all the effects
    33. A SKETCHING TOOL: Black-Box Schematic. (Diagram: the Software under Test in the middle, with stimulus types flowing in -- Argument Inputs, Globals, Database, Exception -- and response types flowing out -- Argument Outputs, Globals, Database, Exception.)
    34. BEFORE CONTINUING. Much of the discussion so far involves how to identify what software does. We have introduced thinking tools for systematically capturing our findings. These thought processes and tools can be used anywhere in the lifecycle, e.g., in software design! One Stone for Two Birds!!
    35. Specialist I - Competencies. (Diagram: a matrix of competency levels 1 through 5 for each role -- Test Practitioner, Test Builder, Test Designer, Test Analyst, Test Inspector, Test Environmentalist, Test Specialist.)
    36. BOUNDARY TESTING DESIGN METHODOLOGY
       • Specification
       • Identify elementary boundary conditions
       • Identify boundary points
       • Generate boundary test cases
       • Update test script (add boundary cases)
    37. EXAMPLE: Pay Calculation (1) Specification
       • Compute pay for an employee, given the number of hours worked and the hourly pay rate. For hourly employees (rate < 30), compute overtime at 1.5 times the hourly rate for hours in excess of 40. Salaried employees (rate >= 30) are paid for exactly 40 hours.
    38. EXAMPLE B (2) Identify Behaviors
       • Case 1: Hourly AND No overtime
         - (Rate < 30) & (Hours <= 40)
         - Expect Pay = Hours * Rate
       • Case 2: Hourly AND Overtime
         - (Rate < 30) & (Hours > 40)
         - Expect Pay = 40*Rate + 1.5*Rate*(Hours - 40)
       • Case 3: Salaried (Rate >= 30)
         - Expect Pay = 40 * Rate
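
As a concrete reference, here is a minimal sketch of a pay unit that matches these three behaviors. The name and signature are assumptions for illustration; the course's actual Pay unit may differ.

        // Minimal sketch, assuming pay(hours, rate) returns the pay amount.
        double pay(double hours, double rate) {
            if (rate >= 30.0)                     // Case 3: salaried, paid for exactly 40 hours
                return 40.0 * rate;
            if (hours <= 40.0)                    // Case 1: hourly, no overtime
                return hours * rate;
            return 40.0 * rate + 1.5 * rate * (hours - 40.0);   // Case 2: hourly with overtime
        }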
    39. DECISION TABLE: Columns define Behaviors

        Condition                 | 1 | 2 | 3 | 4
        c1: Rate < 30             | Y | Y | N | N
        c2: Hours <= 40           | Y | N | Y | N
        Action                    |   |   |   |
        a1: Pay = Straight time   | X |   |   |
        a2: Pay = Overtime        |   | X |   |
        a3: Pay = Professional    |   |   | X | X
    40. EXAMPLE B (3) Create Test Cases
       • One test case per column of the decision table
         - Case 1: Hourly, No Overtime
         - Case 2: Hourly, Overtime
         - Case 3: Salaried, No Extra Hours
         - Case 4: Salaried, Extra Hours
       • Order the test cases by column
    41. EXAMPLE B (4) Write Test Script

        Step | Hours | Rate | Expected Pay
          1  |  30   |  10  |   300
          2  |  50   |  10  |   550
          3  |  30   |  40  |  1600
          4  |  50   |  40  |  1600
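
Step 5 of the boundary methodology (slide 36) would then add boundary cases derived from the decision-table conditions Rate < 30 and Hours <= 40. The rows below are one illustrative set, not taken from the slides, using on/off points at Hours = 40/41 and Rate = 29/30:

        Step | Hours | Rate | Expected Pay
          5  |  40   |  29  |  1160        (Hours on the boundary, hourly)
          6  |  41   |  29  |  1203.50     (just past the Hours boundary, overtime)
          7  |  40   |  30  |  1200        (Rate on the salaried boundary)
          8  |  41   |  30  |  1200        (salaried; extra hours not paid)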
    42. Testing Modules -- Drivers. A test driver executes a unit with test case data and captures the results. (Diagram: the Driver reads Test Set Data, passes Arguments to the Unit, receives Results and External Effects, and writes Test Set Results.)
    43. Implementing Test Drivers
       • Complexity
         - Arguments/Results only
         - Special set-up required to execute unit
         - External effects capture/inquiry
         - Oracle announcing "PASS"/"FAIL"
       • Major Benefits
         - Automated, repeatable test script
         - Documented evidence of testing
         - Universal design pattern
    44. Test Driver for Unit Pay

        Driver D_pay uses unit_environment E;
        {
          declare Hrs, Rate, expected;
          testcase_no = 0;
          open tdi_file("tdi-pay.txt");
          open trs_file("trs-pay.txt");
          while (more data in tdi_file) {
            read(tdi_file, Hrs, Rate);
            read(tdi_file, expected);
            testresult = pay(Hrs, Rate);
            write(trs_file, testcase_no++, Hrs, Rate, expected, testresult);
          } //while
          close tdi_file, trs_file;
        } //driver
    45. Test Driver Files (Pay)

        Test Data File
          File name: tdi-pay.txt
          Format (test cases only): rate hours expected-pay
          File content:
            10 40 400
            10 50 550
            10 0  0
          Note: No environment setup. The input file is just like a test script.

        Test Results File
          File name: trs-pay.txt
          Format: case# rate hours exp-pay act-pay
          File content:
            1 10 40 400 400     Pass
            2 10 50 550 500     Fail
            3 10 0  0   0       Pass
          Note: Results file must be inspected for failures.
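
For readers who want runnable code, the pseudocode driver on slide 44 could be written in C++ roughly as follows. This is a sketch, assuming the file layouts shown above and a pay(hours, rate) unit (for example, the sketch after slide 38) linked into the build; it also adds the simple pass/fail oracle mentioned on slide 43, which the original driver leaves to visual inspection.

        #include <fstream>

        double pay(double hours, double rate);      // the unit under test, linked separately

        int main() {
            std::ifstream tdi("tdi-pay.txt");        // test data input: rate hours expected-pay
            std::ofstream trs("trs-pay.txt");        // test results output
            double rate, hours, expected;
            int caseNo = 0;

            while (tdi >> rate >> hours >> expected) {
                double actual = pay(hours, rate);            // invoke the unit
                bool pass = (actual == expected);            // naive oracle; a tolerance
                                                             // would be safer for doubles
                trs << ++caseNo << ' ' << rate << ' ' << hours << ' '
                    << expected << ' ' << actual << ' '
                    << (pass ? "Pass" : "Fail") << '\n';     // log data, result, verdict
            }
            return 0;
        }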
    46. Testing Classes -- Drivers (Black-Box). (Diagram: the Class Test Driver reads Test Set Data, exchanges Method Args/Results with the Class's Method(s), and writes Test Set Results; the Class-state remains hidden inside the Class.)
    47. Example -- Stack Class

        class Stack {
        public:
          Stack();
          void push(int n);
          int pop();
          int top();
          bool empty();
        private:
          int Size;
          int Top;
          int Values[100];
        };

       Notes:
       (1) Class state -- variables Size, Top and the first 'Size' values in array Values.
       (2) Methods push and pop modify class state; top and empty inquire about the state.
       (3) Stack does not require any test environment of its own.
       (4) Class state HIDDEN from test, i.e., black box.
    48. Test Driver Files (Stack class)

        Test Data File (tdi-stack.txt)
          Methods: 1-push, 2-pop, 3-top, 4-empty
          File content:
            1 8
            1 7
            3 7
            2 7
            2 8
            4 true
          Note: No test environment setup.

        Test Results File (trs-stack.txt)
          File content:
            1 1 8               (push)
            2 1 7               (push)
            3 3 7 8       Fail  (top should be 7)
            4 2 7 8       Fail
            5 2 8 7       Fail  (pop should be 8)
            6 4 1         Pass  (stack should be empty)
          Note: Results file must be inspected for pass/fails.
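
A C++ sketch of the class test driver could look like the following. It assumes the Stack class from slide 47 is available through a header (here called stack.h, an assumed name) and the method numbering used in tdi-stack.txt; it illustrates the pattern described in the notes, not the course's actual driver.

        #include <fstream>
        #include <string>
        #include "stack.h"     // assumed header providing the Stack class from slide 47

        int main() {
            std::ifstream tdi("tdi-stack.txt");
            std::ofstream trs("trs-stack.txt");
            Stack s;                                  // one object carries state across the whole sequence
            int method, caseNo = 0;

            while (tdi >> method) {                   // unlike a unit driver, the class driver
                ++caseNo;                             // must know WHICH method to call
                if (method == 1) {                    // push: read the value to push
                    int n; tdi >> n;
                    s.push(n);
                    trs << caseNo << ' ' << method << ' ' << n << '\n';
                } else if (method == 2 || method == 3) {   // pop/top: read the expected value
                    int expected; tdi >> expected;
                    int actual = (method == 2) ? s.pop() : s.top();
                    trs << caseNo << ' ' << method << ' ' << expected << ' ' << actual << '\n';
                } else if (method == 4) {             // empty: read (and skip) expected true/false
                    std::string expected; tdi >> expected;
                    trs << caseNo << ' ' << method << ' ' << s.empty() << '\n';
                }
            }
            return 0;
        }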
