Quality assurance


  • Ant-trail metaphor: Models nudge you in certain directions while you're taking actions or deciding what actions to take. They're like the pheromone trails that ants lay down: they don't necessarily represent the best way to acquire food, but they do nudge individual ants to more-or-less do the same thing as other ants with the same goal are doing. You can always deviate from the model because of local context - just like ants can randomly leave the trail - but doing so pushes against constraints. You are more likely to stay on the trail.

    1. T-76.(4/5)115 Software Project: Quality Practices in Course Projects, 18.10.2003, Juha Itkonen, SoberIT
    2. Contents
       - Testing as a part of incremental development
       - Exploratory peer testing approach
       - Test planning
       - Test reporting
       - Designing and managing test cases
    3. Quality practices as part of incremental development
    4. Quality practices are an integral part of software development
       - Testing is often seen as a separate, last phase of the software development process
         - that can be outsourced to a separate testing team
         - that only needs to be done just before the release, if there is any time
       - Quality practices cannot be separated from the rest of software development
         - Testing has to be involved from the beginning
         - Testers can, and should, contribute in each phase of the software development life cycle
         - QA is much more than the final acceptance testing phase
    5. The V-model of testing
       - The V-model is an extension of the waterfall model
       - You can imagine a little V-model inside each iteration
         - However, you might want to be more iterative on the iteration level, too
       - Do not take the V-model as a process for the whole project
    6. Two extremes of organizing testing
       - Waterfall model: coders and testers work as separate groups
       - Agile models (XP): the leading idea is testing in collaboration between coders, testers, and the customer
    7. Strive for a more agile approach on this course
       - You have a fixed schedule and fixed resources
       - Flexibility is in scope and quality
         - Quality won't appear without planning and explicit actions
       - You don't have separate testing resources
       - You probably don't have comprehensive documentation
       - You probably have some ambiguity and instability in your requirements
       - You don't have much effort to spend
       - You have big risks
    8. Execute tests incrementally
       - Each iteration delivers tested software
       - Don't plan test execution as a separate phase after development
       - Unit tests are executed as a part of the coding activity
         - Test-driven development
       - Functional system tests can be designed and executed simultaneously with implementation
         - Enables fast feedback
       - Remember tracking:
         - What was tested
         - What version and environment
         - When it was tested
         - By whom
         - What the results were
         - How and for what purpose do you use the results?
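The test-driven development mentioned above can be sketched with Python's standard unittest module: the test is written first and fails, and only then is just enough production code written to make it pass. The ShoppingCart class below is a hypothetical example, not part of any course project.

```python
import unittest

# Hypothetical production code, written only after the tests below had failed.
class ShoppingCart:
    def __init__(self):
        self._items = []

    def add(self, name, price):
        if price < 0:
            raise ValueError("price must be non-negative")
        self._items.append((name, price))

    def total(self):
        return sum(price for _, price in self._items)

# In TDD these tests come first; they drive the design of ShoppingCart.
class ShoppingCartTest(unittest.TestCase):
    def test_total_of_added_items(self):
        cart = ShoppingCart()
        cart.add("book", 10.0)
        cart.add("pen", 2.5)
        self.assertEqual(cart.total(), 12.5)

    def test_negative_price_is_rejected(self):
        cart = ShoppingCart()
        with self.assertRaises(ValueError):
            cart.add("broken", -1.0)

if __name__ == "__main__":
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(ShoppingCartTest)
    unittest.TextTestRunner(verbosity=2).run(suite)
```

Because the unit tests live next to the code, they can be re-run on every build, which is what makes "each iteration delivers tested software" feasible.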
    9. Involve the customer early
       - Take the customer with you to specify or review the test cases
         - The customer plays the oracle role
       - Give the customer the opportunity to
         - execute pre-specified or exploratory tests
         - play around with the system
           - before the FD phase
    10. Peer testing with an exploratory approach
    11. Exploratory Testing (ET) is
       - Testing without predefined test cases
       - Manual testing
         - based on the experience, knowledge, and skills of the tester
         - without pre-documented test steps (detailed test cases)
       - Exploring the software or system
         - The goal is to expose quality-related information
         - Continually adjusting plans, re-focusing on the most promising risk areas
         - Following hunches
       - Minimizing time spent on (pre)documentation
    12. Exploratory Testing is not a technique
       - It is an approach
       - Many testing techniques can be used in an exploratory way
       - Exploratory testing and scripted testing are the ends of a continuum: between freestyle exploratory "bug hunting" and pure scripted (automated) testing lie chartered exploratory testing, vague scripts, and fragmentary test cases
    13. Definition of Exploratory Testing
       - Tests are not defined in advance as detailed test scripts or test cases
         - Instead, exploratory testing is exploration with a general mission, without specific step-by-step instructions on how to accomplish the mission
       - Exploratory testing is guided by the results of previously performed tests and the knowledge gained from them
         - An exploratory tester uses any available information on the target of testing, for example a requirements document, a user's manual, or even a marketing brochure
       - The focus in exploratory testing is on finding defects by exploration
         - instead of systematically producing a comprehensive set of test cases for later use
       - Exploratory testing is simultaneous learning of the system under test, test design, and test execution
       - The effectiveness of the testing relies on the tester's knowledge, skills, and experience
    14. Scripted vs. Exploratory Testing
       - In scripted testing, tests are first designed and recorded. They may then be executed at some later time or by a different tester.
       - In exploratory testing, tests are designed and executed at the same time, and they often are not recorded
         - You build a mental model of the product while you test it. This model includes what the product is, how it behaves, and how it's supposed to behave.
       (James Bach, Rapid Software Testing, 2002)
    15. Exploratory Function Testing
       - Use a list of functions to give structure and a high-level guide to your testing
         - Requirements specification
         - Functional specification
         - User manual
       - Explore creatively each individual function and the interactions of functions
         - Cover side paths and interesting or suspicious areas
           - Exceptional inputs, error situations
         - Utilize the information gained during the testing
           - Simultaneous learning
         - Tests are designed simultaneously with test execution
         - Use the list of functions to get back on track
       - Coverage and progress are planned and tracked by functions
         - not by test cases
    16. Session-Based Test Management: a method for managing ET
       - Charter
       - Time Box
       - Reviewable Result
       - Debriefing
    17. Charter
       - Architecting the charters is test planning
       - Brief information / guidelines on:
         - What should be tested?
           - Areas, components, features, ...
         - Why do we test this?
           - Goals
         - How to test (approach)?
           - Specific techniques or tactics to be used
           - Test data
         - What problems to look for?
       - Might include guidelines on:
         - Tools to use
         - What risks are involved
         - Documents to examine
         - Desired output from the testing
    18. Time Box
       - Focused test effort of fixed duration
       - Brief enough for accurate reporting
       - Brief enough to allow flexible scheduling
       - Brief enough to allow course correction
       - Long enough to get solid testing done
       - Long enough for efficient debriefings
       - Beware of overly precise timing
         - Short: 60 minutes (±15)
         - Normal: 90 minutes (±15)
         - Long: 120 minutes (±15)
    19. Reviewable results
       - Charter
       - Effort breakdown
         - Duration (hours:minutes)
         - Test design and execution (percent)
         - Bug investigation and reporting (percent)
         - Session setup (percent)
         - Charter / opportunity (percent/percent)
       - Data files
       - Test notes
       - Bugs
       - Issues
    20. Debriefing
       - The test lead reviews the session sheet to make sure that he understands it and that it follows the protocol
       - The tester answers any questions
       - Session metrics are checked
       - The charter may be adjusted
       - The session may be extended
       - New sessions may be chartered
       - Coaching / mentoring happens
    21. Peer Testing in the I2 iteration
       - Peer group pairs are on the course web pages
       - Plan and prepare for peer testing already before I2
         - Delivering and installing the system
         - Meetings (preparation and debriefing)
         - Agreeing on total effort
       - 17.2.2005: hand off the system to the peer group
         - The system under test
         - All relevant documentation
           - User and installation manual
           - Known bugs, bug reporting guidelines
         - Test charters (at least 2)
           - one general charter, provided by the course
           - at least 1 from the group whose system is tested
       - Peer testing execution
       - 21.2.2005: peer testing reports delivered to the other group
         - Agree on this with your peer group
    22. Peer test reporting
       - Iteration I2 peer test deliverables
         - Peer test reports and session logs × 2 (own and peer group's report)
         - Defect reports go directly into the bug tracking system
           - Peer testing defect reports into the other group's system
           - Bug summary listing as an appendix in the test report
         - In the final report you should assess the peer group's testing efforts and results
    23. Test Planning
    24. Checklist for test planning
       - Overall test objectives (why)
       - What will and won't be tested (what)
       - Test approach (how)
         - Test phases
         - Test strategy, methods, techniques, ...
         - Metrics and statistics
       - Resource requirements (who)
         - Tester assignments and responsibilities
         - Test environments
       - Test tasks and schedule (when)
       - Risks and issues
    25. Overall test objectives (why)
       - The quality goals of the project
       - What is to be achieved by the quality practices?
       - What are the most important qualities and risks for this product?
       - Why are we testing?
       - This course:
         - Plan and document your quality goals in project plan chapter 5.2.1
         - Metrics that are used to evaluate the quality of the results at the end of each iteration
           - Plan and document in project plan chapter 5.2.1
           - Should be visible in project plan chapter 6
    26. What will and won't be tested (scope)
       - Identify components and features of the software under test
         - at a high enough abstraction level
         - Prioritize
       - Both functional and non-functional aspects
       - Consider time, resources, and risks
         - Not everything can be tested, and what is tested can't all be tested thoroughly
       - Identify separately the components and features that are not tested
       - This course:
         - Document in project plan chapter 5.2.2
         - For each iteration
    27. Test case organization and tracking
       - Prioritizing tests
         - The most severe failures
         - The most likely faults
         - Priorities of use cases
           - End users prioritizing the requirements
         - Most faults in the past
         - Most complex or critical
         - Positive / negative
         - ...
       - Create test suites
         - Test-to-pass (positive testing)
         - Test-to-fail (negative testing)
         - Smoke test suite
         - Regression test suite
         - Functional suites
         - Different platforms
         - Priorities
         - ...
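One lightweight way to realize suites like the ones listed above is with Python's standard unittest module. This is only a sketch: the test bodies are placeholders standing in for real checks against the system under test, and the suite names are examples.

```python
import unittest

class SmokeTests(unittest.TestCase):
    """Test-to-pass: quick checks that the build is testable at all."""
    def test_sanity(self):
        # Placeholder for a real "does the application start" check.
        self.assertEqual(1 + 1, 2)

class RegressionTests(unittest.TestCase):
    """Re-run checks for previously fixed bugs."""
    def test_previously_fixed_bug_stays_fixed(self):
        # Placeholder for a check reproducing an old bug report.
        self.assertEqual("abc".upper(), "ABC")

def smoke_suite():
    # Run only the smoke tests, e.g. on every build.
    return unittest.TestLoader().loadTestsFromTestCase(SmokeTests)

def full_suite():
    # Smoke plus regression, e.g. before a release.
    suite = unittest.TestSuite()
    loader = unittest.TestLoader()
    suite.addTests(loader.loadTestsFromTestCase(SmokeTests))
    suite.addTests(loader.loadTestsFromTestCase(RegressionTests))
    return suite

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(full_suite())
```

Grouping cases into named suites makes it cheap to answer "what was tested" per build: the smoke suite runs constantly, the full suite at iteration ends.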
    28. Test approach (how)
       - How testing is performed in general and in each iteration
         - Levels of testing
         - Test techniques
           - Functional, non-functional
           - Methods and techniques
           - Tools and automation
           - Exploratory testing
             - used in peer testing (use it also to supplement the planned tests)
       - What other QA activities are used, and how
         - Document/code reviews or inspections
         - Coding standard
         - Collecting continuous feedback from the customer
       - Reporting and defect management procedures
         - how the testing results are utilized and the feedback is provided for steering the project
       - Scope of test documentation
         - On what level and how test cases are documented
         - What other test documentation is produced
       - This course:
         - Plan the approach and document it in the project plan
         - General approach in chapter 5.2.1
         - Details for each iteration in chapter 5.2.2
    29. Resource requirements (who)
       - People
         - How many, what expertise
         - Responsibilities
       - Equipment
         - Computers, test hardware, printers, tools
       - Office and lab space
         - Where will they be located? How big will they be? How will they be arranged?
       - Tools and documents
         - Word processors, databases, custom tools. What will be purchased, what needs to be written?
       - Miscellaneous supplies
         - Disks, phones, reference books, training material; whatever else might be needed over the course of the project
       - This course:
         - Document in the project plan
         - Define responsibilities
         - Identify limited / critical resources
         - Location and availability
    30. Test environments
       - Identification of test environments
         - Hardware, software, OS, network, ...
       - Prioritization and focusing of test suites on each
         - The number of combinations can be huge
         - Regression testing in different environments
       - Scheduling implications
       - Test lab
         - Different hardware and software platforms
         - Cleaning the machines
         - Setting up the test data
         - Moving from one platform to another
         - People vs. hardware needs
       - This course:
         - Plan carefully what is a realistic goal for testing in different environments
           - Quality goals of the project
         - Prioritize
         - Document your choices in the test plan
    31. Testing tasks and schedule (when)
       - Work Breakdown Structure (WBS)
         - Areas of the software
         - Testable features
         - Assigning responsibilities
       - Mapping testing to the overall project schedule
       - Both duration and effort
       - Build schedule
         - Number of test cycles
         - Regression tests
       - Releases
         - External links, e.g. beta testing
       - Consider using relative dates
       - This course:
         - Document in the project plan
         - If you are going to do e.g. usability testing, performance testing, or code reviews, there should be corresponding tasks in the project schedule
    32. QA planning during iteration I1 planning (DL 31.10.)
       - Project level:
         - Identify quality goals
         - Plan the QA approach (strategy)
           - How to achieve the goals
           - Document in project plan chapter 5.2
         - Plan test environments and tools
           - Document in project plan chapter 5.3
         - Plan test case organization and tracking
         - Deliverables and metrics
         - How the results are used
           - For what purpose
       - Iteration level:
         - What will be tested
           - Features, quality attributes
           - What won't be tested
         - Details of the QA approach
           - What QA practices are used
           - How the practices are used
           - Priorities of testing
         - Testing rounds
           - i.e., how many times and when certain tests are executed
         - Tasks and schedule
           - Resources
           - Responsibilities
           - Test deliverables
         - Document in the project plan chapter 6
       - You have less than 2 weeks to do project-level and I1 QA planning!
    33. Test Reporting
    34. Defect tracking and reporting
       - Why defect tracking?
         - You don't forget found defects
         - You get metrics
       - Think about which bugs are reported, and when
         - During coding?
         - After inspection?
         - Not before system testing?
       - Bug lifecycle
         - When and how bugs are managed
         - When and which bugs are fixed
           - Who decides, when, and how
       - Use Bugzilla or some other defect tracking system
         - Bugzilla is provided by the course
       - Document your choices in project plan chapter 5.2
    35. Bug metrics
       - Description of severe bugs found and open
       - Other QA metrics
         - unit test coverage
         - code reviews
         - source code metrics
         - ...

       Bug counts by iteration:

                     I1    I2    Total
         Reported    10    75    85
         Closed       5    45    50
         Open         5    30    35

       Bugs by severity:

                                    Block  Critical  Major  Minor  Trivial  Total
         This iteration reported      0       1        10     15     49      75
         Total open                   1       2         5     10     17      35
    36. Quality assessment 1/2
       - Max 10-20 functional areas
       - Testers' assessment of the current quality status of the system
       - You can plan your own qualitative scales

       Legend
         Coverage: 0 = nothing; 1 = we looked at it; 2 = we checked all functions; 3 = it's tested
         Quality: good / not sure / bad

         Functional area    Coverage  Quality   Comments
         GUI editor            0         -      Not started
         Encoder               3        bad     2 critical bugs found during the last test round, lots of small problems
         Admin tools           1      not sure  Nothing serious yet
         File conversions      2        good    Only a few minor defects found, very efficient implementation
    37. Quality assessment 2/2
       - Evaluate the quality of the different functional areas of the system
         - How much effort has been put into test execution?
         - What is the coverage of testing?
         - What can you say about the quality of the particular component, based on your test results and 'gut feeling' during testing?
         - E.g., is the number of reported bugs low because of a lack of testing, or high because of intensive testing?
       - Assess the quality status of the system against the quality goals of the project
    38. Test report and log
       - A test report template is provided
         - Summary of testing tasks and results
           - No detailed lists of passed and failed test cases
         - Includes an evaluation of the quality
       - Test log
         - Provides a chronological record of relevant details about the execution of tests
         - Who tested, when, and what (version, revision, environment, etc.)
         - Lists all executed test cases
         - Results, remarks, bugs, and issues of each test case
         - Execution date & time, used data files, etc.
         - See TestCaseMatrix.xls, for example
    39. Test Case Design
    40. Deriving test cases from use cases
       - If the functional requirements are modelled as use cases, it is sensible to utilize them in functional testing
       - Use case != test case
         - Testing is interested in the uncommon and abnormal scenarios
         - One use case leads to several test cases
       - Prioritize use cases and use this prioritization when prioritizing tests
         - Prioritization in testing is the distribution of effort
         - (not the order of execution)
       - Maintain traceability between use cases and test cases
       - Use cases are not complete specifications
         - Testing only the conditions that are mentioned in the use case is usually not enough
       - See Robert V. Binder's "Extended Use Case Test Design Pattern": http://www.rbsc.com/docs/TestPatternXUC.pdf
    41. Use case example: use case
       1. User slides a card through the card reader
       2. Card reader scans the employee ID from the card
          - Exception 1: Card can't be read -> log event; use case ends
       3. System validates employee access
          - Exception 2: Employee ID is invalid -> log event; use case ends
       4. System unlocks the door for a configured time period
          - Exception 3: System is unable to unlock the door -> log event; use case ends
       5. User opens the door
          - Exception 4: Door is not opened -> system waits for timeout; system locks the door; use case ends
       6. User enters and the door shuts
          - Exception 5: Door is not shut -> system waits for timeout; log event; set alarm condition; use case ends
       7. System locks the door
          - Exception 6: Door fails to lock -> system attempts to lock the door; log event; set alarm condition; use case ends
    42. Use case example: test cases
       - Test Case 1: Valid employee card is used
         - Slide the card through the reader
         - Verify the door is unlocked
         - Enter the building
         - Verify the door is locked
       - Test Case 2: Card can't be read
         - Swipe a card that is not valid
         - Verify the event is logged
       - Test Case 3: Invalid employee ID
         - Swipe a card with an invalid employee ID
         - Verify the door is not unlocked
         - Verify the event is logged
       - Test Case 4: System unable to unlock the door
         - Swipe the card
         - "Injected" failure of the unlocking mechanism
         - Verify the event is logged
       - Test Case 5: Door is not opened
         - Swipe the card
         - Verify the door is unlocked
         - Don't open the door and wait until the timeout is exceeded
         - Verify the door is locked
       - Test Case 6: Door is not shut after entry
         - Swipe the card
         - Enter the building
         - Hold the door open until the timeout is exceeded
         - Verify the alarm is sounded
         - Verify the event is logged
       - Test Case 7: Door fails to lock
         - Swipe the card
         - Enter the building
         - "Injected" failure of the locking mechanism
         - Verify the alarm is sounded
         - Verify the event is logged
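Test cases like 1 and 4 above could eventually be automated. The sketch below shows the idea against a hypothetical DoorController class with an injectable locking mechanism; every name and the whole API here are invented for illustration, since the real system would expose its own interfaces.

```python
# Hypothetical door-control model, invented for this sketch.
class DoorController:
    def __init__(self, unlock_works=True):
        self.unlock_works = unlock_works  # injection point for failures
        self.locked = True
        self.log = []

    def swipe(self, employee_id, valid_ids):
        if employee_id not in valid_ids:
            self.log.append("invalid id")
            return
        if not self.unlock_works:
            self.log.append("unlock failed")
            return
        self.locked = False

    def close_and_lock(self):
        self.locked = True

# Test Case 1: a valid employee card unlocks the door, then it re-locks.
door = DoorController()
door.swipe("emp42", valid_ids={"emp42"})
assert not door.locked
door.close_and_lock()
assert door.locked

# Test Case 4: "injected" failure of the unlocking mechanism is logged.
broken = DoorController(unlock_works=False)
broken.swipe("emp42", valid_ids={"emp42"})
assert broken.locked
assert "unlock failed" in broken.log
```

The `unlock_works` flag plays the role of the "injected failure" from the slide: exceptional paths that are hard to trigger against real hardware become trivial to test against a model with an injection point.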
    43. Error-guessing and ad hoc testing
       - Always worth including
       - Use after systematic techniques have been applied
       - Can find some faults that systematic techniques miss
       - Supplements systematic techniques
       - Consider:
         - Past failures
         - Intuition
         - Experience
         - Brainstorming
         - "What is the craziest thing we can do?"
         - Lists in the literature, error catalogs
    44. Test Case Specification (IEEE Std 829)
       - Test-case-specification identifier
       - Test items: describes the detailed feature, code module, and so on to be tested
       - Input specifications: specifies each input required to execute the test case (by value with tolerances, or by name)
       - Output specifications: describes the result expected from executing the test case. Results may be outputs and features (for example, response time) required of the test items
       - Environmental needs: the hardware, software, test tools, facilities, staff, and so on needed to run the test case
       - Special procedural requirements: describes any special constraints on the test procedures which execute this test case (special set-up, operator intervention, ...)
       - Intercase dependencies: lists the identifiers of test cases which must be executed prior to this test case, and describes the nature of the dependencies
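As an illustration, the IEEE 829 fields above can be captured as a simple structured record. The concrete values below are invented, and the shortened field names are this sketch's own convention, not part of the standard.

```python
# A minimal test case record following the IEEE 829 fields listed above.
# All concrete values are invented for illustration.
test_case = {
    "id": "TC-ACCESS-001",  # test-case-specification identifier
    "test_items": "Door access control: card validation",
    "input_spec": "Card with valid employee ID 'emp42'",
    "output_spec": "Door unlocks within the configured period; event logged",
    "environmental_needs": "Card reader simulator, test DB with one employee",
    "special_procedures": "None",
    "intercase_dependencies": [],  # no prerequisite test cases
}

# A tiny completeness check: every field must be present.
REQUIRED_FIELDS = {
    "id", "test_items", "input_spec", "output_spec",
    "environmental_needs", "special_procedures", "intercase_dependencies",
}
assert REQUIRED_FIELDS <= set(test_case)
```

Keeping test cases as structured records (rather than free prose) makes checks like the completeness assertion above, and later tracking, straightforward.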
    45. A simple approach to test cases
       - The common details of a test suite and test procedure (like the test environment) are documented elsewhere
         - Avoids copy-paste
         - Test catalogs are utilized to describe common details of test cases
           - Test all available ways of performing the function (menu, keyboard, GUI buttons, menu shortcut, shortcut keys, ...)
           - Test the settings or preferences that affect the function
       - Note: this may leave too much space for an inexperienced tester

       Example test case:
         Test case ID:     TC-12.34.5
         Priority:         2
         Test case title:  Indent functionality
         Description:      Indenting the current line to the right and left; indenting the selected lines to the right and left (moves the indentation, no aligning)
         Notes:            Req 12.34
    46. Test Catalogs
       - A test catalog is a list of typical tests for a certain situation
       - Based on experience of the typical errors that developers make
    47. Common pitfalls in test case definition
       - Poor test case organization
         - One big pile of test cases
         - You don't know what a certain set of test cases actually tests, or which cases test a certain functionality
         - You don't know what was tested after testing
       - Testing the wrong things
         - Prioritize and select the most important tests
         - Consider the test case's probability of revealing an important fault
       - Writing too detailed step-by-step scripts
         - Not enough time for detailed scripting
         - A few detailed but irrelevant test cases designed and executed -> bad quality of testing, no major defects found
         - Don't program people
    48. Example: how to manage test cases
       - TestCaseMatrix.xls
       - When do you write these test cases? (Hint: not at the end of the project)
    49. Well-timed test design
       - Early test design
         - Test design finds faults
         - Faults found early are cheaper to fix
         - The most significant faults are found first
         - Faults are prevented, not built in
         - No additional effort
           - re-schedule test design
         - Test design causes requirement changes
       - Not-too-early test case design
         - Design tests in implementation order
           - Start test design from the most completed and probable features
           - Test cases are designed during or after implementation, but incrementally
         - Avoids anticipatory test design and deprecated, incorrect test cases that are not based on the actual features
           - if things change or specifications are not detailed enough for testing
         - Test planning must begin early; test case design not necessarily