Quality assurance
  • Ant-trail metaphor: models nudge you in certain directions while you're taking actions or deciding what actions to take. They're like the pheromone trails that ants lay down: they don't necessarily represent the best way to acquire food, but they nudge individual ants to do more or less the same thing as other ants with the same goal. You can always deviate from the model because of local context, just as ants can randomly leave the trail, but doing so pushes against constraints. You are more likely to stay on the trail.

Presentation Transcript

  • T-76.(4/5)115 Software Project: Quality Practices in Course Projects. Juha Itkonen, SoberIT, 18.10.2003
  • Contents
    • Testing as a part of incremental development
    • Exploratory peer testing approach
    • Test planning
    • Test reporting
    • Designing and managing test cases
  • Quality practices as part of incremental development
  • Quality practices are an integral part of software development
    • Often, testing is seen as a separate, final phase of the software development process
      • That can be outsourced to a separate testing team
      • That only needs to be done just before the release – if there is any time
    • Quality practices cannot be separated from the rest of software development
      • Testing has to be involved from the beginning
      • Testers can, and should, contribute to each phase of the software development life-cycle
      • QA is much more than the final acceptance testing phase
  • The V-model of testing
    • V-model is an extension of the waterfall model
    • You can imagine a little V-model inside each iteration
      • However, you might want to be more iterative on iteration level, too.
    • Do not take the V-model as a process for the whole project
  • Two extremes of organizing testing: the waterfall model, with coders and testers as separate groups, and agile models (XP), where coders, testers, and the customer work together. Leading idea: testing in collaboration
  • Strive for a more agile approach on this course
    • You have fixed
      • Schedule
      • Resources
    • Flexibility is in scope and quality
      • Quality won’t appear without planning and explicit actions
    • You don’t have separate testing resources
    • You probably don’t have comprehensive documentation
    • You probably have some ambiguity and instability in your requirements
    • You don’t have much effort to spare
    • You have big risks
  • Execute tests incrementally
    • Each iteration delivers tested software
    • Don’t plan test execution as a separate phase after development
    • Unit tests are executed as a part of coding activity
      • Test-driven development (see the sketch after this list)
    • Functional system tests can be designed and executed simultaneously with implementation
      • Enables fast feedback
    • Remember tracking
      • What was tested
      • What version and environment
      • When it was tested
      • By whom
      • What were the results
      • How and for what purpose do you use the results?
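    A minimal sketch of the test-driven development bullet above, in pytest style. The parse_version helper is invented for illustration and shown inline so the example is self-contained; in TDD the test would be written first and would fail until the helper exists.

      # test_version.py -- a unit test that lives with the code it tests
      import pytest

      def parse_version(text: str) -> tuple:
          """Hypothetical project helper: parse 'major.minor.patch' into ints."""
          major, minor, patch = text.split(".")
          return int(major), int(minor), int(patch)

      def test_parse_version_happy_path():
          assert parse_version("1.2.3") == (1, 2, 3)

      def test_parse_version_rejects_garbage():
          # Negative test: malformed input must raise, not return nonsense.
          with pytest.raises(ValueError):
              parse_version("not-a-version")

    Running pytest after every change is what makes “each iteration delivers tested software” cheap to keep true.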
  • Involve the customer early
    • Take the customer with you to specify or review the test cases
      • The customer plays the oracle role
    • Give the customer the opportunity to
      • Execute pre-specified or exploratory tests
      • Play around with the system
        • Before the FD phase
  • Peer testing with exploratory approach
    Exploratory Testing (ET) is:
    • Testing without predefined test cases
    • Manual testing
      • Based on the experience, knowledge, and skills of the tester
      • Without pre-documented test steps (detailed test cases)
    • Exploring the software or system
      • The goal is to expose quality-related information
      • Continually adjusting plans, re-focusing on the most promising risk areas
      • Following hunches
    • Minimizing time spent on (pre)documentation
  • Exploratory Testing is not a technique
    • It is an approach
    • Many testing techniques can be used in exploratory way
    • Exploratory testing vs. scripted testing are the ends of a continuum
    The continuum runs from freestyle exploratory “bug hunting”, through chartered exploratory testing, fragmentary test cases, and vague scripts, to pure scripted (automated) testing.
  • Definition of Exploratory Testing
    • Tests are not defined in advance as detailed test scripts or test cases.
      • Instead, exploratory testing is exploration with a general mission without specific step-by-step instructions on how to accomplish the mission.
    • Exploratory testing is guided by the results of previously performed tests and the gained knowledge from them.
      • An exploratory tester uses any available information of the target of testing, for example a requirements document, a user’s manual, or even a marketing brochure.
    • The focus in exploratory testing is on finding defects by exploration
      • Instead of systematically producing a comprehensive set of test cases for later use.
    • Exploratory testing is simultaneous learning of the system under test, test design, and test execution.
    • The effectiveness of the testing relies on the tester’s knowledge, skills, and experience.
  • Scripted vs. Exploratory Testing
    • In scripted testing, tests are first designed and recorded. Then they may be executed at some later time or by a different tester.
    • In exploratory testing, tests are designed and executed at the same time, and they often are not recorded.
      • You build a mental model of the product while you test it. This model includes what the product is and how it behaves, and how it’s supposed to behave
    Source: James Bach, Rapid Software Testing, 2002
  • Exploratory Function Testing
    • Use a list of functions to give structure and a high-level guide to your testing
      • Requirements specification
      • Functional specification
      • User manual
    • Creatively explore each individual function and the interactions between functions
      • Cover side paths, interesting and suspicious areas
        • Exceptional inputs, error situations
      • Utilize the information gained during the testing
        • Simultaneous learning
      • Tests are designed simultaneously with test execution
      • Use the list of functions to get back on track
    • Coverage and progress are planned and tracked by functions
      • Not by test cases
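    Tracking coverage by functions rather than test cases can be as light as a checklist; a sketch with an invented function list, where the 0–3 levels anticipate the coverage scale used in the quality assessment later in this deck:

      # Exploratory coverage tracked per function, not per test case.
      coverage = {
          # function name -> (coverage level 0-3, session notes)
          "open file":  (3, "exceptional inputs and error situations covered"),
          "save file":  (2, "all paths touched, side paths still open"),
          "export pdf": (1, "only glanced at it"),
          "print":      (0, "not started"),
      }

      def needs_work(cov):
          """Functions to revisit: anything below level 2."""
          return [name for name, (level, _) in cov.items() if level < 2]

      print(needs_work(coverage))  # -> ['export pdf', 'print']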
  • Session-Based Test Management: a method for managing ET
    • Charter
    • Time Box
    • Reviewable Result
    • Debriefing
  • Charter
    • Architecting the Charters is test planning
    • Brief information / guidelines on:
      • What should be tested?
        • Areas, components, features, …
      • Why do we test this?
        • goals
      • How to test (approach)?
        • Specific techniques or tactics to be used
        • Test data
      • What problems to look for?
    • Might include guidelines on:
      • Tools to use
      • What risks are involved
      • Documents to examine
      • Desired output from the testing
  • Time Box
    • Focused test effort of fixed duration
    • Brief enough for accurate reporting
    • Brief enough to allow flexible scheduling
    • Brief enough to allow course correction
    • Long enough to get solid testing done
    • Long enough for efficient debriefings
    • Beware of overly precise timing
      • Short: 60 minutes (±15)
      • Normal: 90 minutes (±15)
      • Long: 120 minutes (±15)
  • Reviewable results
    • Charter
    • Effort Breakdown
      • Duration (hours:minutes)
      • Test design and execution (percent)
      • Bug investigation and reporting (percent)
      • Session setup (percent)
      • Charter / opportunity (percent/percent)
    • Data Files
    • Test Notes
    • Bugs
    • Issues
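    One way to keep a session sheet honest is to make its structure explicit; a sketch of the reviewable result as a record type, where the field names follow the list above but the class itself is illustrative, not part of any SBTM tool:

      from dataclasses import dataclass, field

      @dataclass
      class SessionSheet:
          """One time-boxed ET session's reviewable result."""
          charter: str
          duration_minutes: int       # e.g. 90 for a normal session
          test_pct: int               # test design and execution
          bug_pct: int                # bug investigation and reporting
          setup_pct: int              # session setup
          charter_pct: int            # on-charter vs. opportunity testing
          opportunity_pct: int
          test_notes: list = field(default_factory=list)
          bugs: list = field(default_factory=list)
          issues: list = field(default_factory=list)

          def __post_init__(self):
              # Both breakdowns should account for the whole session.
              if self.test_pct + self.bug_pct + self.setup_pct != 100:
                  raise ValueError("effort breakdown must sum to 100%")
              if self.charter_pct + self.opportunity_pct != 100:
                  raise ValueError("charter/opportunity split must sum to 100%")

      sheet = SessionSheet(
          charter="Explore file conversions with malformed inputs",
          duration_minutes=90,
          test_pct=60, bug_pct=25, setup_pct=15,
          charter_pct=90, opportunity_pct=10,
          bugs=["crash on empty input file"],
      )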
  • Debriefing
    • The test lead reviews the session sheet to make sure that it is understood and follows the protocol
    • The tester answers any questions
    • Session metrics are checked
    • Charter may be adjusted
    • Session may be extended
    • New sessions may be chartered
    • Coaching / Mentoring happens
  • Peer Testing in I2 iteration
    • Peer group pairs are on course web pages
    • Plan and prepare for peer testing already before I2
      • Delivering and installing the system
      • Meetings (preparation and debriefing)
      • Agreeing on total effort
    • 17.2.2005 - Hand-off the system to the peer group
      • The system under test
      • All relevant documentation
        • User and installation manual
        • Known bugs, bug reporting guidelines
      • Test Charter (at least 2 charters)
        • one general charter, provided by the course
        • and at least one from the group whose system is tested
    • Peer testing execution
    • 21.2.2005 - Peer testing reports delivered to the other group
      • Agree on this with your peer group
  • Peer test reporting
    • Iteration I2 peer test deliverables
      • Peer test reports and session logs x 2 (your own and the peer group’s report)
      • Defect reports directly into bug tracking system
        • Peer testing defect reports into the other group’s system
        • Bug summary listing as an appendix in the test report
      • In the final report you should assess the peer group’s testing efforts and results
  • Test Planning
  • Checklist for test planning
    • Overall test objectives (why)
    • What will and won’t be tested (what)
    • Test approach (how)
      • Test phases
      • Test strategy, methods, techniques, …
      • Metrics and statistics
    • Resource requirements (who)
      • Tester assignments and responsibilities
      • Test environments
    • Test tasks and schedule (when)
    • Risks and issues
  • Overall test objectives (why)
    • The quality goals of the project
    • What is to be achieved by the quality practices, and what are the most important qualities and risks for this product?
    • Why are we testing?
    • This course
      • Plan and document your quality goals in project plan chapter 5.2.1
      • Metrics that are used to evaluate the quality of the results at the end of each iteration
        • Plan and document in project plan chapter 5.2.1
        • Should be visible in project plan chapter 6.
  • What will and won’t be tested (scope)
    • Identify components and features of the software under test
      • High-enough abstraction level
      • Prioritize
    • Both functional and non-functional aspects
    • Consider time, resources and risks
      • Not everything can be tested, and what is tested can’t all be tested thoroughly
    • Identify separately components and features that are not tested
    • This course
      • Document in project plan chapter 5.2.2
      • For each iteration
  • Test case organization and tracking
    • Prioritizing tests
      • The most severe failures
      • The most likely faults
      • Priorities of use cases
        • End-user prioritizing the requirements
      • Most faults in the past
      • Most complex or critical
      • Positive / negative
    • Create test suites (see the sketch after this list)
      • Test-to-Pass (Positive testing)
      • Test-to-Fail (Negative testing)
      • Smoke test suite
      • Regression test suite
      • Functional suites
      • Different platforms
      • Priorities
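    The sketch referenced in the list above: pytest markers are one lightweight way to realize such suites. The marker names mirror the suite names here and would be declared in pytest.ini; the tiny system-under-test stubs are invented so the example runs as-is.

      import pytest

      # Invented stand-ins for the system under test.
      def launch():
          return object()

      def convert(text: str) -> str:
          if len(text) > 1_000_000:
              raise ValueError("input too large")
          return text.upper()

      @pytest.mark.smoke
      def test_application_starts():           # smoke suite: test-to-pass
          assert launch() is not None

      @pytest.mark.regression
      def test_old_fault_stays_fixed():        # regression suite
          assert convert("") == ""

      @pytest.mark.negative
      def test_rejects_oversized_input():      # test-to-fail (negative testing)
          with pytest.raises(ValueError):
              convert("x" * 2_000_000)

    Run a suite by marker, e.g. pytest -m smoke; priorities and platforms map onto further markers the same way.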
  • Test approach (how)
    • How testing is performed in general and in each iteration
      • Levels of testing
      • Test techniques
        • Functional, non-functional
        • Methods and techniques
        • Tools and automation
        • Exploratory testing
          • Used in peer testing (use it also to supplement the planned tests)
    • What other QA activities are used and how
      • Document/code reviews or inspections
      • Coding standards
      • Collecting continuous feedback from the customer
    • Reporting and defect management procedures
      • How the testing results are utilized and how feedback is provided to steer the project
    • Scope of test documentation
      • On what level and how test cases are documented
      • What other test documentation is produced
    • This course
      • Plan the approach and document in the project plan
      • General approach in chapter 5.2.1
      • Details for each iteration in chapters 5.2.2
  • Resource requirements (who)
    • People
      • How many, what expertise
      • Responsibilities
    • Equipment
      • Computers, test hardware, printers, tools.
    • Office and lab space
      • Where will they be located? How big will they be? How will they be arranged?
    • Tools and documents
      • Word processors, databases, custom tools. What will be purchased, what needs to be written?
    • Miscellaneous supplies
      • Disks, phones, reference books, training material. Whatever else might be needed over the course of the project.
    • This course
      • Document in the project plan
    • Define responsibilities
    • Identify limited / critical resources
    • Location and availability
  • Test environments
    • Identification of test environments
      • Hardware, software, OS, network, ...
    • Prioritization and focusing test suites on each
      • Number of combinations can be huge (see the sketch after this list)
      • Regression testing in different environments
    • Scheduling implications
    • Test lab
      • Different hardware and software platforms
      • Cleaning the machines
      • Setting up the test data
      • Moving from one platform to another
      • People vs. hardware needs
    • This course
      • Plan carefully what is a realistic goal for testing in different environments
        • Quality goals of the project
      • Prioritize
      • Document your choices in the test plan
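    The sketch referenced above, showing why the number of environment combinations explodes and why test suites must be focused on each; the platform lists are invented:

      from itertools import product

      hardware = ["desktop", "laptop"]
      systems  = ["Windows XP", "Windows 2000", "Linux"]
      browsers = ["IE 6", "Firefox 1.0", "Opera 7"]
      networks = ["LAN", "dial-up"]

      combos = list(product(hardware, systems, browsers, networks))
      print(len(combos))  # 2 * 3 * 3 * 2 = 36 environments for full coverage

      # A realistic plan: full suite on the prioritized environment(s),
      # smoke/regression subset on the rest.
      full_suite = [c for c in combos if c[1] == "Windows XP" and c[2] == "IE 6"]
      print(len(full_suite))  # 4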
  • Testing tasks and schedule (when)
    • Work Breakdown Structure (WBS)
      • Areas of the software
      • Testable features
      • Assigning responsibilities
    • Mapping testing to overall project schedule
    • Both duration and effort
    • Build schedule
      • Number of test cycles
      • Regression tests
    • Releases
      • External links, e.g. beta testing
    • Consider using relative dates
    • This course
      • Document in the project plan
      • If you are going to do, e.g., usability testing, performance testing, or code reviews, there should be corresponding tasks in the project schedule
  • QA planning during iteration I1 planning (DL 31.10.)
    • Project level
    • Identify quality goals
    • Plan QA approach (strategy)
      • How to achieve the goals
      • Document in project plan chapter 5.2
    • Plan test environments and tools
      • Document in project plan chapter 5.3
    • Plan test case organization and tracking
    • Deliverables and metrics
    • How the results are used
      • For what purpose
    • Iteration level
    • What will be tested
      • Features, quality attributes
      • What won’t be tested
    • Details of the QA approach
      • What QA practices are used
      • How practices are used
      • Priorities of testing
    • Testing rounds
      • i.e., how many times and when certain tests are executed
    • Tasks and schedule
      • Resources
      • Responsibilities
      • Test deliverables
    • Document in the project plan chapter 6.
    You have less than 2 weeks to do project level and I1 QA planning!
  • Test Reporting
  • Defect tracking and reporting
    • Why defect tracking?
      • You don’t forget the defects that were found
      • You get metrics
    • Think about which bugs are reported, and when
      • During coding?
      • After inspection?
      • Not before system testing?
    • Bug lifecycle (see the sketch after this list)
      • When and how bugs are managed
      • When and what bugs are fixed
        • Who decides, when and how
    • Use Bugzilla or some other defect tracking system
      • Bugzilla provided by the course
    • Document your choices in project plan chapter 5.2
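    The bug lifecycle mentioned above is easiest to agree on when written down as explicit states and transitions; a sketch with one plausible workflow, not Bugzilla's exact one:

      # Allowed bug state transitions; anything else is rejected.
      LIFECYCLE = {
          "NEW":      {"ASSIGNED", "RESOLVED"},
          "ASSIGNED": {"RESOLVED"},
          "RESOLVED": {"VERIFIED", "REOPENED"},  # tester verifies the fix
          "REOPENED": {"ASSIGNED", "RESOLVED"},
          "VERIFIED": {"CLOSED", "REOPENED"},
          "CLOSED":   {"REOPENED"},
      }

      def transition(state: str, new_state: str) -> str:
          if new_state not in LIFECYCLE.get(state, set()):
              raise ValueError(f"illegal transition {state} -> {new_state}")
          return new_state

      state = "NEW"
      for step in ("ASSIGNED", "RESOLVED", "VERIFIED", "CLOSED"):
          state = transition(state, step)
      print(state)  # CLOSED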
  • Bug metrics
    • Description of severe bugs found and open
      • Other QA metrics
        • unit test coverage
        • code reviews
        • source code metrics
    Example bug metrics:

      Bugs by iteration:
                    I1    I2    Total
        Reported    10    75     85
        Closed       5    45     50
        Open         5    30     35

      Bugs by severity:
                                   Block  Critical  Major  Minor  Trivial  Total
        This iteration reported      0        1       10     15       49     75
        Total open                   1        2        5     10       17     35
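    Tables like the one above fall out of the tracker data directly; a sketch of the aggregation, assuming each exported bug record carries iteration, severity, and status fields (the sample records are invented):

      from collections import Counter

      bugs = [
          {"iteration": "I1", "severity": "Major",    "status": "open"},
          {"iteration": "I2", "severity": "Trivial",  "status": "closed"},
          {"iteration": "I2", "severity": "Critical", "status": "open"},
      ]

      reported = Counter(b["iteration"] for b in bugs)
      open_by_severity = Counter(b["severity"] for b in bugs if b["status"] == "open")

      print(dict(reported))          # {'I1': 1, 'I2': 2}
      print(dict(open_by_severity))  # {'Major': 1, 'Critical': 1}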
  • Quality assessment 1/2
    • Max 10-20 functional areas
    • Testers’ assessment of the current quality status of the system
    • You can plan your own qualitative scales
    Legend
      Coverage: 0 = nothing, 1 = we looked at it, 2 = we checked all functions, 3 = it’s tested
      Quality: good / not sure / bad

      Functional area    Coverage  Quality    Comments
      GUI editor            0      -          Not started
      Encoder               3      bad        2 critical bugs found during last test round, lots of small problems
      Admin tools           1      not sure   Nothing serious yet
      File conversions      2      good       Only a few minor defects found, very efficient implementation.
  • Quality assessment 2/2
    • Evaluate the quality of the different functional areas of the system
      • how much effort has been put on test execution
      • what is the coverage of testing
      • what can you say about the quality of the particular component based on your test results and ’gut feeling’ during testing
      • e.g. is the number of reported bugs low because of lack of testing or high because of intensive testing
    • Assess the quality status of the system against the quality goals of the project
  • Test report and log
    • Test report template provided
      • Summary of testing tasks and results
        • No detailed lists of passed and failed test cases
      • Includes evaluation of the quality
    • Test log
      • Provides a chronological record of relevant details about the execution of tests
      • Who tested, when and what (version, revision, environment, etc.)
      • Lists all executed test cases
      • Results, remarks, bugs and issues of each test case
      • Execution date&time, used data files, etc.
      • See TestCaseMatrix.xls, for example.
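    A chronological test log with those details needs nothing fancier than an appended CSV row; a sketch where the column names follow the list above and the row values are invented:

      import csv
      from datetime import datetime

      FIELDS = ["when", "tester", "test_case", "version", "environment",
                "result", "bugs", "remarks"]

      with open("test_log.csv", "a", newline="") as f:
          writer = csv.DictWriter(f, fieldnames=FIELDS)
          if f.tell() == 0:          # header only when the file is new
              writer.writeheader()
          writer.writerow({
              "when": datetime.now().isoformat(timespec="minutes"),
              "tester": "NN",
              "test_case": "TC-12.34.5",
              "version": "build 42",
              "environment": "Windows XP / IE 6",
              "result": "fail",
              "bugs": "BUG-107",
              "remarks": "indent moves but does not align",
          })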
  • Test Case Design
  • Deriving test cases from use cases
    • If the functional requirements are modelled as use cases it is sensible to utilize them in functional testing
    • Use case != test case
      • Testing is interested in the uncommon and abnormal scenarios
      • One use case leads to several test cases
    • Prioritize use cases and use this prioritization when prioritizing tests
      • Prioritization in testing is the distribution of efforts
      • (Not the order of execution)
    • Maintain traceability between use cases and test cases
    • Use cases are not complete specifications
      • Testing only the conditions that are mentioned in use case is usually not enough
    • See Robert V. Binder’s “Extended Use Case Test Design Pattern” http://www.rbsc.com/docs/TestPatternXUC.pdf
  • Use case example: use case
    • User slides a card through the card-reader
    • Card-reader scans employee ID from card
      • Exception 1: Card can’t be read
        • Log event
        • Use case ends
    • System validates employee access
      • Exception 2: Employee ID is invalid
        • Log event
        • Use case ends
    • System unlocks door for configured time period
      • Exception 3: System unable to unlock door
        • Log event
        • Use case ends
    • User opens door
      • Exception 4: Door is not opened
        • System waits for timeout
        • System locks door
        • Use case ends
    • User enters and door shuts
      • Exception 5: Door is not shut
        • System waits for timeout
        • Log event
        • Set alarm condition
        • Use case ends
    • System locks door
      • Exception 6: Door fails to lock
        • System attempts to lock door
        • Log event
        • Set alarm condition
        • Use case ends
  • Use case example: test cases
    • Test Case 1: Valid employee card is used
      • Slide the card through the reader
      • Verify door is unlocked
      • Enter building
      • Verify door is locked
    • Test Case 2: Card can’t be read
      • Swipe a card that is not valid
      • Verify event is logged
    • Test Case 3: Invalid employee ID
      • Swipe card with invalid employee ID
      • Verify door is not unlocked
      • Verify event is logged
    • Test Case 4: System unable to unlock door
      • Swipe card
      • “Injected” failure of unlocking mechanism
      • Verify event is logged
    • Test Case 5: Door is not opened
      • Swipe card
      • Verify door is unlocked
      • Don’t open the door and wait until timeout is exceeded
      • Verify door is locked
    • Test Case 6: Door is not shut after entry
      • Swipe card
      • Enter building
      • Hold door open until timeout is exceeded
      • Verify alarm is sounded
      • Verify event is logged
    • Test Case 7: Door fails to lock
      • Swipe card
      • Enter building
      • “Injected” failure of locking mechanism
      • Verify alarm is sounded
      • Verify event is logged
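    Once the exception paths can be injected, these test cases translate almost mechanically into automated checks; a sketch against an invented DoorController, trimmed to the first three cases:

      import pytest

      class DoorController:
          """Invented system under test, just enough for the example."""
          def __init__(self, valid_ids=frozenset({"E-001"})):
              self.valid_ids = valid_ids
              self.unlocked = False
              self.log = []

          def swipe(self, employee_id):
              if employee_id is None:                  # card can't be read
                  self.log.append("read failure")
              elif employee_id not in self.valid_ids:  # invalid employee ID
                  self.log.append("invalid id")
              else:
                  self.unlocked = True

      def test_valid_card_unlocks_door():              # Test Case 1
          door = DoorController()
          door.swipe("E-001")
          assert door.unlocked

      def test_unreadable_card_is_logged():            # Test Case 2
          door = DoorController()
          door.swipe(None)
          assert not door.unlocked and "read failure" in door.log

      def test_invalid_id_keeps_door_locked():         # Test Case 3
          door = DoorController()
          door.swipe("E-999")
          assert not door.unlocked and "invalid id" in door.log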
  • Error-guessing and ad hoc testing
    • Always worth including
    • After systematic techniques have been used
    • Can find some faults that systematic techniques can miss
    • Supplements systematic techniques
    • Consider
      • Past failures
      • Intuition
      • Experience
      • Brain storming
      • “What is the craziest thing we can do?”
      • Lists in literature, error catalogs
  • Test Case Specification (IEEE Std 829)
    • Test-case-specification identifier
    • Test items: describes the detailed feature, code module, and so on to be tested.
    • Input specifications: specifies each input required to execute the test case (by value with tolerances or by name).
    • Output specifications: describes the result expected from executing the test case. Results may be outputs and features (for example, response time) required of the test items.
    • Environmental needs: the hardware, software, test tools, facilities, staff, and so on needed to run the test case.
    • Special procedural requirements: describes any special constraints on the test procedures which execute this test case (special set-up, operator intervention, …).
    • Intercase dependencies: lists the identifiers of test cases which must be executed prior to this test case, and describes the nature of the dependencies.
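    The IEEE 829 fields map naturally onto a record type if you want test case specifications to be machine-checkable; a sketch only, since the standard prescribes document content, not code, and the example values are invented:

      from dataclasses import dataclass, field

      @dataclass
      class TestCaseSpec:
          """Test case specification fields after IEEE Std 829."""
          identifier: str
          test_items: str                 # feature / module under test
          input_spec: str                 # inputs, by value or by name
          output_spec: str                # expected results
          environmental_needs: str = ""   # hardware, software, tools, staff
          special_procedures: str = ""    # set-up, operator intervention, ...
          intercase_dependencies: list = field(default_factory=list)

      tc = TestCaseSpec(
          identifier="TC-12.34.5",
          test_items="Indent functionality",
          input_spec="Selected lines; indent right, then left",
          output_spec="Indentation moves, no aligning",
          intercase_dependencies=["TC-12.34.1"],
      )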
  • A simple approach to test cases
    • The common details of a test suite and test procedure (like test environment) are documented elsewhere
      • Avoiding copy paste
      • Test catalogs are utilized to describe common details of test cases
        • Test all available ways of performing the function (menu, keyboard, GUI buttons, menu short-cut, short-cut keys, …)
        • Test settings or preferences that affect this function
    This may leave too much space for an inexperienced tester.
      Test case ID: TC-12.34.5
      Priority: 2
      Test case title: Indent functionality
      Description:
        • Indenting the current line to the right, and left.
        • Indenting the selected lines to the right, and left.
          • Moves the indentation, no aligning.
      Notes: Req 12.34
  • Test Catalogs
    • A test catalog is a list of typical tests for a certain situation
    • Based on experience of typical errors that developers make
  • Common pitfalls in test case definition
    • Poor test case organization
      • One big pile of test cases
      • Don’t know what a certain set of test cases actually tests or which cases test a certain functionality
      • Don’t know what was tested after testing
    • Testing wrong things
      • Prioritize and select the most important tests
      • Consider the test case’s probability to reveal an important fault
    • Writing too detailed step-by-step scripts
      • Not enough time for detailed scripting
      • A few detailed but irrelevant test cases designed and executed -> bad quality of testing, no major defects found
      • Don’t program people
  • Example: how to manage test cases (TestCaseMatrix.xls). When do you write these test cases? (Hint: not at the end of the project.)
  • Well-timed test design
    • Early test design
      • test design finds faults
      • faults found early are cheaper to fix
      • most significant faults found first
      • faults prevented, not built in
      • no additional effort, only re-scheduled test design
      • test design causes requirement changes
    • Not too early test case design
      • Design tests in implementation order
        • Start test design from the most completed and probable features
        • Test cases are designed during or after implementation, but incrementally
      • Avoiding anticipatory test design and deprecated, incorrect test cases that are not based on the actual features
        • If things change or specifications are not detailed enough for testing
      • Test planning must begin early, test case design not necessarily