Transcript

  • 1. SOFTWARE TESTING Presented to SENG 623 Software Quality Management 13-March-2003
  • 2. Agenda
    • Introductions
    • Software Testing Fundamentals
    • Software Testing Levels
    • Software Test Management
    • Concluding Remarks
    • Questions, Comments, & Great Thoughts
  • 3. Software Testing Fundamentals
  • 4. Software Testing Fundamentals Overview
    • Software Testing is the process used to
      • uncover errors before the product is delivered to the customer, and
      • give confidence that the product is working well.
  • 5. Software Testing Fundamentals Overview
    • The Fundamental Limitation of Testing: testing can only show the presence of an error; it can never prove the absence of all errors.
  • 6. Software Testing Fundamentals Test Design and Test Execution
    • The Testing Activity is more than just running tests
    • Before tests can be run, they must be designed and written.
    • When should tests be designed?
    • As Late As Possible?
    • As Early as Possible?
  • 7. Software Testing Fundamentals Testing Objectives
    • A good test case is one that has a high probability of finding an as-yet-undiscovered error.
    • A successful test is one that uncovers an as-yet-undiscovered error.
  • 8. Software Testing Fundamentals Software Testing Techniques
    • Software Testing Techniques provide systematic approaches for designing tests that
      • Exercise the internal logic of software components
      • Exercise the input and output domains of the program to uncover errors in
        • program function,
        • behavior and
        • performance
  • 9. Software Testing Fundamentals Software Testing Techniques
    • Dynamic Analysis versus Static Analysis
    • Black Box versus White (Glass) Box Testing (the two orientations are contrasted in the sketch below)
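    The slides contain no code; the following is a minimal Python sketch of how one hypothetical function might be tested from a black-box view (specification only) and a white-box view (knowledge of the internal branches). The function absolute_value and both test classes are invented for illustration.

```python
# Hedged illustration: one hypothetical function under test, with test cases
# designed two ways. Nothing here comes from the presentation itself.
import unittest


def absolute_value(x):
    # Implementation under test; the white-box tests target its two branches.
    if x < 0:
        return -x
    return x


class BlackBoxTests(unittest.TestCase):
    # Designed from the specification alone: "returns the magnitude of x".
    def test_positive_input(self):
        self.assertEqual(absolute_value(5), 5)

    def test_negative_input(self):
        self.assertEqual(absolute_value(-5), 5)


class WhiteBoxTests(unittest.TestCase):
    # Designed from the code: one test per branch, plus the boundary between them.
    def test_negative_branch(self):
        self.assertEqual(absolute_value(-1), 1)

    def test_non_negative_branch_at_boundary(self):
        self.assertEqual(absolute_value(0), 0)


if __name__ == "__main__":
    unittest.main()
```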
  • 10. Software Testing Levels
  • 11. Software Testing Levels Verification and Validation [V-model diagram: Requirements Analysis, System Design, Component Design, and Coding on the development side are verified and validated against Unit Testing, Integration Testing, and System/Validation Testing on the testing side]
  • 12. Software Testing Levels Unit Testing
    • Unit Testing focuses the effort on the smallest unit of software design.
    • The Unit Test has a Dynamic, White-Box orientation.
    [Diagram: test cases are applied to the module to exercise its interface, local data structures, boundary conditions, independent paths, and error-handling paths; a minimal sketch follows below]
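    A minimal sketch (not part of the original deck) of a unit test that drives the checklist above: the module interface, boundary conditions, and each error-handling path. The discount function and its tests are hypothetical.

```python
# Illustrative only: a made-up module function plus unit tests that exercise
# its interface, boundary conditions, and error-handling paths.
import unittest


def discount(price, rate):
    """Return price reduced by a rate in [0, 1]; reject invalid input."""
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    if price < 0:
        raise ValueError("price must be non-negative")
    return price * (1 - rate)


class DiscountUnitTests(unittest.TestCase):
    def test_interface_with_typical_values(self):
        # Exercises the module interface with an ordinary input.
        self.assertAlmostEqual(discount(100.0, 0.25), 75.0)

    def test_boundary_conditions(self):
        # The edges of the valid rate domain: 0 and 1.
        self.assertAlmostEqual(discount(100.0, 0.0), 100.0)
        self.assertAlmostEqual(discount(100.0, 1.0), 0.0)

    def test_error_handling_paths(self):
        # Each independent error-handling path is driven at least once.
        with self.assertRaises(ValueError):
            discount(100.0, 1.5)
        with self.assertRaises(ValueError):
            discount(-1.0, 0.1)


if __name__ == "__main__":
    unittest.main()
```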
  • 13. Software Testing Levels Unit Testing Metrics
    • Defect Tracking Metrics
    • Code Complexity
    • Test Completeness Metrics
    • Code Coverage - Test Effectiveness Ratios (a worked example follows the list)
      • Statement Coverage - TER 1
      • Branch/Decision Coverage - TER 2
      • Decision-condition Coverage
      • LCSAJ Coverage - TER 3
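    As a worked example (counts invented), the Test Effectiveness Ratios above are simple proportions of covered items; in practice a tool such as coverage.py reports the counts, but the arithmetic is just this:

```python
# Coverage ratios as plain arithmetic, with hypothetical counts from one test run.
def coverage_ratio(items_exercised, items_total):
    """Test Effectiveness Ratio: exercised items divided by total items."""
    if items_total == 0:
        return 1.0  # nothing to cover counts as fully covered
    return items_exercised / items_total


statements_total, statements_hit = 120, 102   # assumed counts
branches_total, branches_hit = 40, 28         # assumed counts

ter1 = coverage_ratio(statements_hit, statements_total)  # statement coverage
ter2 = coverage_ratio(branches_hit, branches_total)      # branch/decision coverage

print(f"TER1 (statement coverage): {ter1:.0%}")  # -> 85%
print(f"TER2 (branch coverage):    {ter2:.0%}")  # -> 70%
```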
  • 14. Software Testing Levels (Component) Integration Testing
    • Integration Testing is a systematic technique for conducting tests to uncover the errors associated with interfacing.
    • It also exposes errors related to the larger functionality of the individual components under test.
    • Different incremental integration strategies (the top-down case is sketched after this list):
      • Top-Down Integration
      • Bottom-Up Integration
    • Integration Testing is typically Dynamic, and is usually Black Box
    • Must also perform
      • Regression Testing
      • Smoke Testing
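    A hedged sketch of the top-down strategy: the high-level module is integrated first, while a not-yet-integrated lower-level module is replaced by a stub. Every name here (order_total, tax_for, and so on) is invented for illustration; unittest.mock supplies the stub.

```python
# Top-down integration sketch: test the upper module against a stubbed
# lower-level module, checking both the result and the interface call.
import unittest
from unittest import mock


def order_total(items, tax_service):
    """Upper-level module: combines line items with a lower-level tax module."""
    subtotal = sum(price * qty for price, qty in items)
    return subtotal + tax_service.tax_for(subtotal)


class TopDownIntegrationTest(unittest.TestCase):
    def test_total_with_stubbed_tax_module(self):
        tax_stub = mock.Mock()
        tax_stub.tax_for.return_value = 1.50  # stub stands in for the real module
        total = order_total([(10.0, 2), (5.0, 1)], tax_stub)
        self.assertAlmostEqual(total, 26.50)
        tax_stub.tax_for.assert_called_once_with(25.0)  # interface errors show up here


if __name__ == "__main__":
    unittest.main()
```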
  • 15. Software Testing Levels (Component) Integration Testing Metrics
    • Error rates in design
    • Error rates in implementation
    • Error rates in test design
    • Test Execution Progress Metrics
    • Time/Cost Metrics
    • Requirements Churn Metrics
  • 16. Software Testing Levels System Testing
    • Software is never used in isolation; it operates within a larger context (e.g., hardware, people, information). System testing is a series of different tests whose primary purpose is to fully exercise the computer-based system.
    • Types of system tests [Beizer, 1984] :
      • Recovery testing
      • Security testing
      • Stress testing
      • Performance testing (a minimal sketch follows)
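    A rough sketch, under assumed numbers, of what a performance test can look like: the entry point must handle a batch of requests within a response-time budget. handle_request and the budget values are placeholders, not anything taken from the slides.

```python
# Performance-test sketch: fail if the operation exceeds its time budget.
import time


def handle_request(payload):
    # Placeholder for the real system entry point.
    return sum(payload)


def test_performance_budget(iterations=1000, budget_seconds=0.5):
    payload = list(range(1000))
    start = time.perf_counter()
    for _ in range(iterations):
        handle_request(payload)
    elapsed = time.perf_counter() - start
    assert elapsed < budget_seconds, f"{iterations} requests took {elapsed:.3f}s"


if __name__ == "__main__":
    test_performance_budget()
    print("performance budget met")
```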
  • 17. Software Testing Levels System Testing Metrics
    • Test Completeness Metrics
    • Defect Arrival Rate (computed, together with the closure rate, in the sketch below)
    • Cumulative Defects by Status
    • Defect Closure Rate
    • Reliability Predictions
    • Schedule Tracking Metrics
    • Staff and Resource Tracking Metrics
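    To make two of these metrics concrete, here is a small computation over invented defect records: arrivals and closures per week of system testing, plus the open backlog at the end of the period.

```python
# Defect arrival rate, closure rate, and backlog from a toy defect log.
from collections import Counter

# (week_opened, week_closed or None if still open) for each defect report.
defects = [(1, 1), (1, 2), (2, 2), (2, 3), (2, None), (3, None), (3, 4), (4, None)]

arrivals = Counter(opened for opened, _ in defects)
closures = Counter(closed for _, closed in defects if closed is not None)

for week in sorted(set(arrivals) | set(closures)):
    print(f"week {week}: arrived={arrivals.get(week, 0)}, closed={closures.get(week, 0)}")

open_backlog = sum(1 for _, closed in defects if closed is None)
print(f"open backlog at end of period: {open_backlog}")
```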
  • 18. Software Testing Levels Validation Testing
    • After System Testing, a final series of software tests – Validation Testing – begins.
    • The simple definition is that validation succeeds when software functions in a manner that can be reasonably expected by the customer.
    • Different approaches for Validation Testing:
      • Acceptance Tests
      • Alpha Test
      • Beta Test
  • 19. Software Testing Levels Validation Testing Metrics
    • Defect Arrival Rate
    • Cumulative Defects by Status
    • Defect Closure Rate
    • Defect Backlog by Severity
    • Cost of Quality Metrics
  • 20. Software Testing Levels Quality Views
    • Unit Testing
      • Manufacturing View of Quality
    • (Component) Integration Testing
      • Manufacturing View of Quality
    • System Testing
      • Product View of Quality
    • Validation Testing
      • User View of Quality
  • 21. Software Testing Management
  • 22. Software Testing Management Who Does the Testing?
    • Development Team
    • Independent Test Group (ITG)
    • Customer / Clients
  • 23. Software Testing Management Organizing for Software Testing
    • Software Engineers create programs, documentation and related artifacts.
    • The Software Engineer is proud of what has been built and looks askance at anyone who attempts to tear it down.
    • The ITG attempts to “break” the thing that the Software Engineer has built.
    • ITG is paid to find the errors.
    • The Software Developer is always responsible for testing the individual units of the program.
    • In many cases, the developer also conducts integration testing.
    • The ITG is part of the Software Development Project Team. It becomes involved during the specification activity and stays involved throughout the project.
  • 24. Software Testing Management Organizing for Software Testing
    • The Software Engineer does not turn the software over to ITG and walk away.
    • They have to work together closely throughout a software project to ensure that thorough tests will be conducted.
    • While the testing is conducted, the developer must be available to correct errors that are uncovered.
  • 25. Software Testing Management Organizing for Software Testing
    • The customer is always involved in the validation testing.
    • The relationship between Software Project Team and customers in Validation Testing is different from the relationship between the Software Engineer and ITG.
  • 26. Software Testing Management Responsibilities of Management
    • Senior Management at the Organization Level
      • Set the testing policy, strategy and objectives for the company
      • Ensure that metrics for test effort and results are collected and used
      • Invest in Tool Support
      • Commit to improving the test process
  • 27. Software Testing Management Responsibilities of Management
    • Test Management at the Project Level
      • Assignment of Resources
      • Test Planning (before development)
      • Reviews and/or Inspections (throughout)
      • Test Design (before or in parallel with development)
      • Test Execution (after development)
      • Problem Resolution (whenever needed)
  • 28. Software Testing Management When To Stop Testing?
    • Early Release
      • many defects still left, with some at the “critical” level
      • reactive environment keeps team “fire fighting” rather than working on new product
      • often results in key employee turnover
      • frustrated customers
    “There is no single, valid, rational criterion for stopping. Furthermore, given any set of applicable criteria, how each is weighted depends very much on the product, the environment, the culture and the attitude to risk.” [Beizer, 1990]
    • Late Release
      • team and users confident in quality of product
      • significant increase in product costs
      • organization experiences loss of revenue, smaller market share, project cancellations, but may gain reputation for quality
      • frustrated customers
  • 29. Software Testing Management When To Stop Testing?
    • No single metric should be used to determine when to stop testing
      • Defect Discovery Rate
      • Trend in Severity Level
      • Remaining Defects Estimation Criteria
      • Coverage Measurement
      • Reliability Growth Models (sketched below)
      • Running out of Resources
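    One way reliability growth models feed the stopping decision is by estimating the defects still to be found. The sketch below uses the Goel-Okumoto model, mu(t) = a(1 - e^(-bt)), with parameters that are assumed here rather than fitted to real project data; it illustrates the arithmetic only.

```python
# Remaining-defect estimate from the Goel-Okumoto reliability growth model.
import math


def expected_defects_found(a, b, t):
    """Expected cumulative defects discovered by test time t: a * (1 - e^(-b*t))."""
    return a * (1.0 - math.exp(-b * t))


a, b = 120.0, 0.15   # assumed (normally fitted to the project's defect history)
t_now = 20.0         # weeks of testing so far

found_so_far = expected_defects_found(a, b, t_now)
remaining = a - found_so_far

print(f"expected defects found by week {t_now:.0f}: {found_so_far:.1f}")
print(f"estimated defects remaining: {remaining:.1f}")
```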
  • 30. Concluding Remarks
  • 31. Concluding Remarks Inspections vs. Testing
    • Both have their place
      • Inspections prevent defects
      • Testing identifies defects
    In an ancient land there was a family of healers, one of whom was known by all and was employed as a physician to a great lord. The physician was asked which of his family was the most skillful healer. He replied, "I tend to the sick and dying with drastic and dramatic treatments, and on occasion someone is cured and my name gets out among the lords." "My elder brother cures sickness when it just begins to take root, and his skills are known among the local peasants and neighbours." "My eldest brother is able to sense the spirit of sickness and eradicate it before it takes form. His name is unknown outside our home."
  • 32. Concluding Remarks Keys to Successful Testing
    • In order to be cost effective, concentrate testing on the areas where it will be most effective.
    • Test the most critical parts (to the user) first
    • Concentrate the testing on the parts that are the most difficult, the most complex, the hardest to write, and the least liked.
    • Errors are “social” - test where historically the most errors have been found before.
    • Plan the testing so that when testing is stopped, the most effective testing in the time allotted has already been done.
  • 33. Concluding Remarks Testing Pitfalls
    • Absence of a Testing Policy
    • Failing to Plan Testing / Ineffective Testing
    • Lack of Schedule / Staff / Budget
    • Too Little / Too Much Test Data
    • Uneven Testing / Bad Coverage
    • Lack of Testing Tools / Blind Faith in Test Tools
    • Lack of Testability of Software
  • 34. Concluding Remarks
    • The Testing Paradox: if testing finds errors in the product, how can it also build confidence that the product is working well, when it has just been shown that it wasn’t?
    • The Pesticide Paradox: “Every method you use to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffectual.” [Beizer, 1983]
  • 35. Questions, Comments, and Great Thoughts
  • 36. Thank You!