T-76.(4/5)115 Software Project: Quality Practices in Course Projects. 18.10.2003, Juha Itkonen, SoberIT
Contents: Testing as a part of incremental development; Exploratory peer testing approach; Designing and managing test cases
Quality practices as part of incremental development
Quality practices are an integral part of software development. Often, testing is seen as a separate, last phase of the software development process that can be outsourced to a separate testing team and that only needs to be done just before the release, if there is any time. Quality practices cannot be separated from the rest of software development: testing has to be involved from the beginning, and testers can, and should, contribute to each phase of the software development life-cycle.
QA is much more than the final acceptance testing phase
The V-model of testing: the V-model is an extension of the waterfall model. You can imagine a little V-model inside each iteration; however, you might want to be more iterative at the iteration level, too.
Do not take the V-model as a process for the whole project
Two extremes of organizing testing: the waterfall model, with coders and testers as separate groups, and agile models (XP), whose leading idea is testing in collaboration between coders, testers, and the customer.
Strive for a more agile approach on this course. Flexibility is in scope and quality, but quality won't appear without planning and explicit actions. You don't have separate testing resources, you probably don't have comprehensive documentation, and you probably have more or less ambiguity and instability in your requirements.
You don’t have too much effort to spend
Execute tests incrementally: each iteration delivers tested software, so don't plan test execution as a separate phase after development. Unit tests are executed as a part of the coding activity, and functional system tests can be designed and executed simultaneously with implementation. Record what version and environment you tested, and how and for what purpose you use the results.
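As a minimal sketch of what "unit tests as part of coding" can look like in practice (the function under test and its behaviour are invented for illustration):

```python
import unittest

def parse_quantity(text):
    """Parse a positive integer quantity from user input."""
    value = int(text.strip())
    if value <= 0:
        raise ValueError("quantity must be positive")
    return value

class ParseQuantityTest(unittest.TestCase):
    def test_valid_input(self):
        self.assertEqual(parse_quantity(" 42 "), 42)

    def test_rejects_zero(self):
        with self.assertRaises(ValueError):
            parse_quantity("0")

if __name__ == "__main__":
    unittest.main()  # run with every build, not in a separate test phase
```

The point is that the tests live next to the code and run continuously, so every iteration's delivery is already tested.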
Involve the customer early. Take the customer with you to specify or review the test cases; the customer plays the oracle role. Give the customer the opportunity to execute pre-specified or exploratory tests and to play around with the system.
Peer testing with exploratory approach
Exploratory Testing (ET) is testing without predefined test cases: based on the experience, knowledge, and skills of the tester, without pre-documented test steps (detailed test cases), exploring the software or system. The goal is to expose quality-related information while continually adjusting plans and re-focusing on the most promising risk areas, minimizing time spent on (pre)documentation.
Exploratory Testing is not a technique; many testing techniques can be used in an exploratory way. Exploratory testing vs. scripted testing are the ends of a continuum: pure scripted (automated) testing, vague scripts, fragmentary test cases, chartered exploratory testing, and freestyle exploratory "bug hunting".
Definition of Exploratory Testing: tests are not defined in advance as detailed test scripts or test cases. Instead, exploratory testing is exploration with a general mission, without specific step-by-step instructions on how to accomplish the mission. Exploratory testing is guided by the results of previously performed tests and the knowledge gained from them. An exploratory tester uses any available information about the target of testing, for example a requirements document, a user's manual, or even a marketing brochure. The focus in exploratory testing is on finding defects by exploration, instead of systematically producing a comprehensive set of test cases for later use. Exploratory testing is simultaneous learning of the system under test, test design, and test execution.
The effectiveness of the testing relies on the tester’s knowledge, skills, and experience.
Scripted vs. Exploratory Testing: in scripted testing, tests are first designed and recorded; then they may be executed at some later time or by a different tester. In exploratory testing, tests are designed and executed at the same time, and they often are not recorded. (James Bach, Rapid Software Testing, 2002)
You build a mental model of the product while you test it. This model includes what the product is, how it behaves, and how it's supposed to behave.
Exploratory Function Testing: use a list of functions (e.g. from the requirements specification) to give structure and a high-level guide to your testing. Explore creatively each individual function and the interactions of functions; cover side paths and interesting or suspicious areas, exceptional inputs, and error situations. Utilize the information gained during the testing: tests are designed simultaneously with test execution. Use the list of functions to get back on track; coverage and progress are planned and tracked by functions.
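A simple sketch of tracking coverage by function; the function names and the helper are invented for illustration, and the 0-3 coverage scale follows the quality assessment slide later in this lecture:

```python
# Hypothetical function list used to steer and track exploratory testing.
function_coverage = {
    "File conversions": 0,
    "Admin tools": 0,
    "Encoder": 0,
    "GUI editor": 0,
}

def record_session(function, coverage_level):
    """After a session, record the highest coverage level reached (0-3)."""
    function_coverage[function] = max(function_coverage[function], coverage_level)

record_session("Encoder", 2)
untested = [f for f, c in function_coverage.items() if c == 0]
print("Not yet explored:", untested)  # use this list to get back on track
```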
Session-Based Test Management (SBTM): a method for managing ET.
Charter: architecting the charters is test planning. A charter gives brief information / guidelines on the areas, components, and features to test, the specific techniques or tactics to be used, and what problems to look for. It might also include guidelines on the desired output from the testing.
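As a sketch, a charter can be captured as a small structured record; the field names and the example charter below are illustrative, not an SBTM standard:

```python
from dataclasses import dataclass, field

@dataclass
class Charter:
    """One session charter; fields mirror the guideline list above."""
    area: str                                          # areas, components, features
    tactics: list = field(default_factory=list)        # techniques or tactics to use
    risks: list = field(default_factory=list)          # what problems to look for
    deliverables: str = "session sheet + bug reports"  # desired output

login_charter = Charter(
    area="Login and session handling",
    tactics=["boundary values for user name", "interrupt the login mid-way"],
    risks=["lockout logic", "error messages leaking information"],
)
```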
Time box: a focused test effort of fixed duration. Brief enough for accurate reporting, flexible scheduling, and course correction; long enough to get solid testing done and for efficient debriefings. Beware of overly precise timing; normal: 90 minutes (±15).
Reviewable results, reported per session: test design and execution (percent); bug investigation and reporting (percent); charter / opportunity (percent/percent).
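The exact metric definitions come from Bach's SBTM material; purely to illustrate the arithmetic (names invented), a session's minutes can be turned into the reported percentages like this:

```python
def session_breakdown(test_min, bug_min, other_min):
    """Split a session's minutes into reported percentage categories.

    A rough sketch: real SBTM metrics also separate on-charter work from
    off-charter 'opportunity' testing, which this toy version omits.
    """
    total = test_min + bug_min + other_min
    return {
        "test design and execution": round(100 * test_min / total),
        "bug investigation and reporting": round(100 * bug_min / total),
        "other (setup, off-charter)": round(100 * other_min / total),
    }

print(session_breakdown(test_min=60, bug_min=20, other_min=10))
# -> {'test design and execution': 67, 'bug investigation and reporting': 22,
#     'other (setup, off-charter)': 11}
```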
Debriefing: the test lead reviews the session sheet to assure that he understands it and that it follows the protocol; the tester answers any questions; session metrics are checked; new sessions may be chartered; coaching / mentoring happens.
Peer testing in the I2 iteration: peer group pairs are listed on the course web pages. Plan and prepare for peer testing already before I2: delivering and installing the system, and the meetings (preparation and debriefing). 17.2.2005: hand off the system to the peer group with all relevant documentation, the user and installation manual, known bugs and bug reporting guidelines, and test charters (at least 2): one general charter provided by the course, and at least one from the group whose system is tested. 21.2.2005: peer testing reports delivered to the other group. Agree on this with your peer group.
Peer test reporting: the iteration I2 peer test deliverables are peer test reports and session logs × 2 (your own and the peer group's report), plus defect reports entered directly into the bug tracking system. Peer testing defect reports go into the other group's system, with a bug summary listing as an appendix in the test report. In the final report you should assess the peer group's testing efforts and results.
Checklist for test planning: overall test objectives (why); what will and won't be tested (what); test strategy, methods, techniques, …; resource requirements (who); tester assignments and responsibilities; test tasks and schedule (when).
Overall test objectives (why): the quality goals of the project. What is to be achieved by the quality practices, and what are the most important qualities and risks for this product? Plan and document your quality goals in project plan chapter 5.2.1, together with the metrics that are used to evaluate the quality of the results at the end of each iteration. They should also be visible in project plan chapter 6.
What will and won't be tested (scope): identify the components and features of the software under test on a high-enough abstraction level, covering both functional and non-functional aspects. Consider time, resources, and risks: not everything can be tested, and what is tested can't all be tested thoroughly. Identify separately the components and features that are not tested. Document in project plan chapter 5.2.2.
Test case organization and tracking: have the end-user prioritize the requirements, and distinguish test-to-pass (positive testing) from test-to-fail (negative testing).
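A small sketch of the test-to-pass / test-to-fail distinction; the withdraw function is a made-up example:

```python
import unittest

def withdraw(balance, amount):
    """Hypothetical function under test: withdraw money from an account."""
    if amount <= 0 or amount > balance:
        raise ValueError("invalid amount")
    return balance - amount

class WithdrawTests(unittest.TestCase):
    # Test-to-pass: valid input, expected behaviour
    def test_normal_withdrawal(self):
        self.assertEqual(withdraw(100, 30), 70)

    # Test-to-fail: invalid input must be rejected, not silently accepted
    def test_overdraft_rejected(self):
        with self.assertRaises(ValueError):
            withdraw(100, 200)

    def test_negative_amount_rejected(self):
        with self.assertRaises(ValueError):
            withdraw(100, -5)

if __name__ == "__main__":
    unittest.main()
```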
Test approach (how): how testing is performed in general and in each iteration, both functional and non-functional; this is also used in peer testing (use it to supplement the planned tests). Cover what other QA activities are used and how: document/code reviews or inspections, and collecting continuous feedback from the customer. Cover the reporting and defect management procedures: how the testing results are utilized and fed back to steer the project. Cover the scope of test documentation: on what level and how test cases are documented, and what other test documentation is produced. Plan the approach and document it in the project plan: the general approach in chapter 5.2.1, and the details for each iteration in chapters 5.2.2.
Resource requirements (who): computers, test hardware, printers, tools. Where will they be located? How big will they be? How will they be arranged? Word processors, databases, custom tools: what will be purchased, what needs to be written? Disks, phones, reference books, training material: whatever else might be needed over the course of the project. Document in the project plan; identify limited / critical resources and their location and availability.
Test environments: identify the test environments (hardware, software, OS, network, ...) and prioritize and focus the test suites on each; the number of combinations can be huge. Consider regression testing in different environments, different hardware and software platforms, moving from one platform to another, and people vs. hardware needs. Plan carefully what is a realistic goal for testing in different environments, given the quality goals of the project, and document your choices in the test plan.
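To see how quickly the combinations explode, here is a small illustration; the environment dimensions are invented, and a real project would take its own from the requirements:

```python
import itertools

# Hypothetical environment dimensions; real ones come from your project.
os_versions = ["Windows XP", "Linux", "Mac OS X"]
browsers = ["IE 6", "Firefox", "Opera"]
databases = ["MySQL", "PostgreSQL"]

combinations = list(itertools.product(os_versions, browsers, databases))
print(len(combinations))  # 3 * 3 * 2 = 18 full combinations already

# A realistic goal is usually a prioritized subset, not the full product:
smoke_set = [c for c in combinations if c[0] == "Windows XP"][:3]
print(smoke_set)
```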
Testing tasks and schedule (when): make a work breakdown structure (WBS), assign responsibilities, and map testing to the overall project schedule, including external links such as beta testing. Consider using relative dates. Document in the project plan: if you are going to do e.g. usability testing, performance testing, or code reviews, there should be corresponding tasks in the project schedule.
QA planning during iteration I1 planning (DL 31.10.): plan the QA approach (strategy) and document it in project plan chapter 5.2; plan the test environments and tools and document them in project plan chapter 5.3; plan the test case organization and tracking (features, quality attributes). Plan the details of the QA approach: what QA practices are used, i.e., how many times and when certain tests are executed; document these in project plan chapter 6. You have less than 2 weeks to do the project-level and I1 QA planning!
Defect tracking and reporting ensures you don't forget found defects. Think about what bugs are reported and when (not before system testing?), when and how bugs are managed, when and what bugs are fixed, and who decides, when, and how. Use Bugzilla or some other defect tracking system; Bugzilla is provided by the course. Document your choices in project plan chapter 5.2.
Bug metrics (example):

              I1   I2   Total
  Reported    10   75   85
  Open         5   30   35
  Closed       5   45   50

Reported this iteration (I2) and total open, by severity:

              Block  Critical  Major  Minor  Trivial  Total
  Reported      0       1       10     15      49      75
  Total open    1       2        5     10      17      35
Include descriptions of the severe bugs found and still open.
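Tables like the ones above can be produced directly from the tracker's data; a minimal sketch, assuming a hypothetical export of (severity, status, iteration) tuples:

```python
from collections import Counter

# Hypothetical export from the defect tracking system.
defects = [
    ("Major", "Open", "I2"),
    ("Critical", "Closed", "I1"),
    ("Minor", "Open", "I2"),
    # ... one tuple per reported defect
]

reported = Counter(sev for sev, status, it in defects)
open_now = Counter(sev for sev, status, it in defects if status == "Open")
for severity in ("Block", "Critical", "Major", "Minor", "Trivial"):
    print(f"{severity:8} reported {reported[severity]:3} open {open_now[severity]:3}")
```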
Quality assessment 1/2: testers' assessment of the current quality status of the system, with max 10-20 functional areas. Legend. Coverage: 0 = nothing; 1 = we looked at it; 2 = we checked all functions; 3 = it's tested. Quality: + = quality is good; ? = not sure; - = quality is bad.

  Functional area    Coverage  Quality  Comments
  File conversions      2         +     Only few minor defects found, very efficient implementation.
  Admin tools           1         ?     Nothing serious yet
  Encoder               3         -     2 critical bugs found during last test round, lot of small problems
  GUI editor            0               Not started

You can plan your own qualitative scales.
Quality assessment 2/2: evaluate the quality of the different functional areas of the system: how much effort has been put into test execution, what is the coverage of testing, and what can you say about the quality of the particular component based on your test results and 'gut feeling' during testing; e.g. is the number of reported bugs low because of a lack of testing, or high because of intensive testing?
Assess the quality status of the system against the quality goals of the project
Test report and log: a test report template is provided. The report is a summary of testing tasks and results, with no detailed lists of passed and failed test cases, and includes an evaluation of the quality. The log provides a chronological record of relevant details about the execution of tests: who tested, when, and what (version, revision, environment, etc.). It lists all executed test cases with the results, remarks, bugs, and issues of each, plus execution date & time, used data files, etc. See TestCaseMatrix.xls, for example.
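A test log can be as simple as an append-only CSV file; a sketch with invented column contents, following the fields listed above:

```python
import csv
import datetime

# Minimal test log entry; the columns follow the slide above.
with open("test_log.csv", "a", newline="") as f:
    csv.writer(f).writerow([
        datetime.datetime.now().isoformat(timespec="minutes"),
        "tester: Maija",                      # who tested
        "build r1542, Win XP",                # version, revision, environment
        "TC-12.34.5 Indent functionality",    # executed test case
        "FAIL - see bug #123",                # result, remarks, bug references
    ])
```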
Test Case Design
Deriving test cases from use cases: if the functional requirements are modelled as use cases, it is sensible to utilize them in functional testing. Testing is interested in the uncommon and abnormal scenarios, too, so one use case leads to several test cases. Prioritize use cases and use this prioritization when prioritizing tests; prioritization in testing means the distribution of effort, not the order of execution. Maintain traceability between use cases and test cases. Note that use cases are not complete specifications: testing only the conditions that are mentioned in a use case is usually not enough. See Robert V. Binder's "Extended Use Case Test Design Pattern", http://www.rbsc.com/docs/TestPatternXUC.pdf
Use case example: use case
1. User slides a card through the card-reader
2. Card-reader scans employee ID from card (Exception 1: Card can't be read)
3. System validates employee access (Exception 2: Employee ID is invalid)
4. System unlocks door for configured time period (Exception 3: System unable to unlock door; Exception 4: Door is not opened)
5. User enters and door shuts (Exception 5: Door is not shut)
6. System attempts to lock door (Exception 6: Door fails to lock)
Use case example: test cases
Test Case 1: Valid employee card is used. Slide the card through the reader.
Test Case 2: Card can't be read. Swipe a card that is not valid.
Test Case 3: Invalid employee ID. Swipe a card with an invalid employee ID and verify the door is not unlocked.
Test Case 4: System unable to unlock door. "Injected" failure of the unlocking mechanism.
Test Case 5: Door is not opened. Don't open the door and wait until the timeout is exceeded.
Test Case 6: Door is not shut after entry. Hold the door open until the timeout is exceeded.
Test Case 7: Door fails to lock. "Injected" failure of the locking mechanism.
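To make the derivation concrete, these test cases can be written down as an automated test skeleton; the door-system API is not given in the lecture, so the bodies are left as placeholders that a project would fill in (or execute manually):

```python
import unittest

class DoorAccessTests(unittest.TestCase):
    """Skeletons for the test cases derived from the card-reader use case.

    One use case, seven test cases: the happy path plus one test per
    exception. The steps below are placeholders, not a real driver API.
    """

    def test_tc1_valid_card_unlocks_door(self):
        ...  # slide a valid employee card through the reader

    def test_tc2_unreadable_card_is_rejected(self):
        ...  # swipe a card that cannot be read

    def test_tc3_invalid_employee_id_keeps_door_locked(self):
        ...  # swipe card with invalid ID, verify door is NOT unlocked

    def test_tc4_unlock_failure_is_handled(self):
        ...  # requires an "injected" failure of the unlocking mechanism

    def test_tc5_timeout_when_door_not_opened(self):
        ...  # wait until the configured time period is exceeded

    def test_tc6_door_held_open_past_timeout(self):
        ...

    def test_tc7_lock_failure_is_handled(self):
        ...  # requires an "injected" failure of the locking mechanism
```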
Error-guessing and ad hoc testing: used after the systematic techniques, it can find some faults that systematic techniques miss, and so supplements them. "What is the craziest thing we can do?" Lists in the literature and error catalogs help.
Test Case Specification (IEEE Std 829):
Test case specification identifier.
Test items: describes the detailed feature, code module, and so on to be tested.
Input specifications: specifies each input required to execute the test case (by value with tolerances or by name).
Output specifications: describes the result expected from executing the test case. Results may be outputs and features (for example, response time) required of the test items.
Environmental needs: the hardware, software, test tools, facilities, staff, and so on needed to run the test case.
Special procedural requirements: describes any special constraints on the test procedures which execute this test case (special set-up, operator intervention, …).
Intercase dependencies: lists the identifiers of test cases which must be executed prior to this test case and describes the nature of the dependencies.
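As a sketch, the IEEE 829 fields map naturally onto a structured record; the class and example values below are illustrative, not part of the standard:

```python
from dataclasses import dataclass, field

@dataclass
class TestCaseSpec:
    """Fields mirror the IEEE Std 829 test case specification above."""
    identifier: str
    test_items: str
    input_spec: str
    output_spec: str
    environmental_needs: str = ""
    special_procedures: str = ""
    intercase_dependencies: list = field(default_factory=list)

tc = TestCaseSpec(
    identifier="TC-12.34.5",
    test_items="Editor indent function",
    input_spec="Select three lines, press the indent key",
    output_spec="All three lines move right by one indent step",
    intercase_dependencies=["TC-12.34.1"],
)
```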
A simple approach to test cases: the common details of a test suite and test procedure (like the test environment) are documented elsewhere, and test catalogs are utilized to describe common details of test cases, e.g. "test all available ways of performing the function (menu, keyboard, GUI buttons, menu short-cut, short-cut keys, …)". This may leave too much space for an inexperienced tester. An example test case in this style:
Test case ID: TC-12.34.5
Priority: 2
Test case title: Indent functionality
Description: Indenting the current line to the right, and left. Indenting the selected lines to the right, and left. Moves the indentation, no aligning.
Notes: Req 12.34. Test settings or preferences that affect this function.
…
Test Catalogs: a test catalog is a list of typical tests for a certain situation, based on experience of the typical errors that developers make.
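A catalog is easy to keep as plain data and apply to many test cases; the entries below are common text-field checks chosen for illustration, not taken from the lecture:

```python
# A test catalog: a reusable list of typical tests for one situation.
TEXT_FIELD_CATALOG = [
    "empty input",
    "only whitespace",
    "maximum allowed length",
    "one character over maximum length",
    "non-ASCII characters (ä, ö, €)",
    "leading/trailing spaces",
]

def expand(test_case_title, catalog):
    """Turn one short test case into concrete checks via the catalog."""
    return [f"{test_case_title}: {entry}" for entry in catalog]

for line in expand("Search field accepts query", TEXT_FIELD_CATALOG):
    print(line)
```

This keeps individual test cases short while still reminding an inexperienced tester what "test this field" should include.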
Common pitfalls in test case definition: poor test case organization, i.e. one big pile of test cases, where you don't know what a certain set of test cases actually tests, which cases test a certain functionality, or what was tested after testing. Prioritize and select the most important tests; consider each test case's probability of revealing an important fault. Another pitfall is writing too detailed step-by-step scripts when there is not enough time for detailed scripting: a few detailed but irrelevant test cases get designed and executed -> bad quality of testing, no major defects found.
Example – how to manage test cases: see TestCaseMatrix.xls. When do you write these test cases? (Hint: not at the end of the project.)
Well-timed test design: faults found early are cheaper to fix, the most significant faults are found first, faults are prevented rather than built in, and test design drives requirement changes. But avoid too-early test case design: design tests in implementation order, starting from the most completed and most probable features. Test cases are designed during or after implementation, but incrementally, avoiding anticipatory test design and deprecated, incorrect test cases that are not based on the actual features when things change or specifications are not detailed enough for testing. Test planning must begin early; test case design not necessarily.