1. Chapter 2 Testing: Introduction. Rachida Dssouli, CIISE, Concordia University, [email_address]. Acknowledgements: Dr. Jerry Gao, Dr. Hong, Dr. En-Nouaary, and many others.
2. Introduction outline: Verification and validation. What is software testing? - Definitions - Testing objectives - Who does software testing? - Software testing activities - Software testing scope - Software testing principles - Software testing process - Software testing myths - Software testing limits - Different types of software testing - Testing fundamentals
3. Techniques for V&V
- Static: collects information about the software without executing it
  - Reviews, walkthroughs, and inspections
  - Static analysis
  - Formal verification
- Dynamic: collects information about the software by executing it
  - Testing: finding errors
  - Debugging: removing errors
4. Static Analysis
- Control flow analysis and data flow analysis
  - Extensively used for compiler optimization and software engineering
- Examples
  - Unreachable statements
  - Variables used before initialization
  - Variables declared but never used
  - Variables assigned twice with no use between assignments
  - Variables used twice with no intervening assignment
  - Possible array bound violations
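Several of the defect classes above can be seen in one small (hypothetical) Python function; a static analyzer would flag each commented line without ever running the code:

```python
def compute(x):
    limit = 10        # variable declared but never used
    y = x * 2         # value assigned but never used before the next assignment
    y = x * 3         # variable assigned twice with no intervening use
    return y
    print("done")     # unreachable statement (after the return)
```

The function still runs correctly; static analysis complements testing by catching these anomalies without needing any test inputs.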
5. Formal Verification
- Given a model of a program and a property, determine mathematically whether the model satisfies the property
- Examples
  - Safety: if the light for east-west is green, then the light for south-north should be red
  - Liveness: if a request occurs, there should eventually be a response
6. What is Software Testing? Several definitions: “Testing is the process of establishing confidence that a program or system does what it is supposed to.” (Hetzel, 1973) “Testing is the process of executing a program or system with the intent of finding errors.” (Myers, 1979) “Testing is any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results.” (Hetzel, 1983)
7. What is Software Testing?
- One of the most important software development phases
- A software process based on well-defined software quality control and testing standards, testing methods, strategies, test criteria, and tools
- Engineers perform all types of software testing activities as part of a software test process
- The last quality checkpoint for software on its production line
8. Testing Objectives
The major objectives of software testing:
- Uncover as many errors (or bugs) as possible in a given product
- Demonstrate that a given software product matches its requirement specifications
- Validate the quality of the software using minimum cost and effort
- Generate high-quality test cases, perform effective tests, and issue correct and helpful problem reports
Major goals: uncover the errors (defects) in the software, including errors in:
- requirements from requirement analysis
- design documented in design specifications
- coding (implementation)
- system resources and system environment
- hardware problems and their interfaces to software
9. Who Does Software Testing?
- Test manager: manages and controls a software test project, supervises test engineers, defines and specifies a test plan
- Software test engineers and testers: define test cases, write test specifications, run tests
- Independent test group
- Development engineers: perform only unit tests and integration tests
- Quality assurance group and engineers: perform system testing; define software testing standards and the quality control process
10. Testing with an Oracle (diagram): apply an input to the software, observe the output, and validate the observed output against the oracle. Is the observed output the same as the expected output?
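The oracle loop in the diagram can be sketched in Python; `software_under_test` and the squaring oracle are hypothetical stand-ins, not part of the slides:

```python
def software_under_test(x):
    return x * x                     # stand-in for the real system

def oracle(test_input, observed):
    expected = test_input ** 2       # the oracle knows the expected output
    return observed == expected

def run_test(test_input):
    observed = software_under_test(test_input)   # apply input, observe output
    # validate the observed output against the expected output
    return "pass" if oracle(test_input, observed) else "fail"
```

In practice the hard part is the oracle itself: for real systems, computing the expected output is often as difficult as writing the program.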
11. Some Limitations of Testing (I)
- Testing all possible inputs is impractical or impossible:
  int foo(int x) { y = very_complex_computation(x); write(y); }
- Testing all possible paths is impractical or impossible:
  int foo(int x) { for (index = 1; index < 10000; index++) write(x); }
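A back-of-the-envelope calculation (hypothetical throughput figure) shows why exhaustive input testing is impractical even for a single pair of 32-bit integer arguments:

```python
inputs_32bit = 2 ** 32                 # possible values of one 32-bit int argument
pairs_32bit = inputs_32bit ** 2        # possible input pairs for two int arguments

tests_per_second = 10 ** 6             # assumed: one million tests per second
seconds_per_year = 60 * 60 * 24 * 365
years_for_pairs = pairs_32bit / tests_per_second / seconds_per_year
# well over half a million years just for two integer inputs
```

This is why test selection (coverage criteria) rather than exhaustion drives practical testing.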
12. Limitations of Testing (II)
- Dijkstra, 1972: testing can be used to show the presence of bugs, but never their absence
- Goodenough and Gerhart, 1975: testing is successful if the program fails
- The (modest) goal of testing: testing cannot guarantee the correctness of software, but it can be effectively used to find errors (of certain types)
13. Economics of Testing (I): the characteristic S-curve for error removal. (Graph: number of defects found vs. time spent testing; beyond a cutoff point testing stops being effective and other techniques are needed.)
14. Economics of Testing (II): testing tends to intercept errors in order of their probability of occurrence. (Graph: number of defects vs. progress of testing; the less likely errors, which are often the more critical ones, remain not yet found the longest.)
15. Economics of Testing (III): verification is insensitive to the probability of occurrence of errors. (Graph: number of defects vs. progress of verification; errors are found regardless of how likely they are to occur.)
16. Fundamental Questions in Testing
- When can we stop testing? Test coverage
- What should we test? Test generation
- Is the observed output correct? Test oracle
- How well did we do? Test efficiency
- Who should test your program? Independent V&V
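The "when can we stop?" question is usually answered by a coverage criterion. A minimal sketch (hypothetical two-branch function, not from the slides) of measuring branch coverage:

```python
covered = set()   # records which branches the test inputs exercised

def classify(x):
    if x < 0:
        covered.add("negative")       # branch 1
        return "negative"
    covered.add("non-negative")       # branch 2
    return "non-negative"

def branch_coverage(test_inputs, all_branches):
    for x in test_inputs:
        classify(x)
    return len(covered) / len(all_branches)   # fraction of branches hit
```

A suite reaching 1.0 on this criterion is "adequate" for branch coverage; it still says nothing about omitted requirements.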
17. Software Testing Scope: software testing process, software testing management, configuration management, software problem management, software testing methods, software test criteria, software testing tools, software test models, and software testing strategies.
18. Software Testing Activities
- Test planning: define a software test plan by specifying:
  - a test schedule for the test process and its activities, as well as assignments
  - test requirements and items
  - test strategy and supporting tools
- Test design and specification:
  - conduct test design based on well-defined test generation methods
  - specify test cases to achieve a targeted test coverage
- Test set-up:
  - testing tools and environment set-up
  - test suite set-up
- Test operation and execution: run test cases manually or automatically
19. Software Testing Activities
- Test result analysis and reporting: report software testing results and conduct test result analysis
- Problem reporting: report program errors using a systematic solution
- Test management and measurement: manage software testing activities, control the testing schedule, and measure testing complexity and cost
- Test automation: define and develop software test tools, adopt and use software test tools, write software test scripts and facilities
- Test configuration management: manage and maintain the different versions of software test suites, test environments and tools, and documents for the various product versions
20. Verification and Validation
Software testing is one element of a broader topic often referred to as verification and validation (V&V).
Verification refers to the set of activities that ensure that software correctly implements a specific function.
Validation refers to a different set of activities that ensure that the software that has been built is traceable to customer requirements.
Boehm [BOE81]: Verification: “Are we building the product right?” Validation: “Are we building the right product?”
The definition of V&V encompasses many SQA activities, including formal technical reviews, quality and configuration audits, performance monitoring, different types of software testing, feasibility studies, and simulation.
21. Software Testing Principles
- Principle #1: Complete testing is impossible.
- Principle #2: Software testing is not a simple activity. Reasons:
  - Quality testing requires testers to understand a system/product completely
  - Quality testing needs an adequate test set and efficient testing methods
  - A very tight schedule and a lack of test tools
- Principle #3: Testing is risk-based.
- Principle #4: Testing must be planned.
- Principle #5: Testing requires independence (SQA team).
- Principle #6: Quality software testing depends on:
  - A good understanding of the software products and the related application domain
  - A cost-effective testing methodology, coverage, test methods, and tools
  - Good engineers with creativity and solid software testing experience
22. Software Testing Myths
- We can test a program completely; in other words, we can test a program exhaustively.
- We can find all program errors as long as test engineers do a good job.
- We can test a program by trying all possible inputs and states of the program.
- A good test suite must include a great number of test cases.
- Good test cases are always complicated ones.
- Software test automation can replace test engineers in performing good software testing.
- Software testing is simple and easy; anyone can do it, and no training is needed.
23. Software Testing Limits
- Due to the testing time limit, it is impossible to achieve total confidence.
- We can never be sure the specifications are 100% correct.
- We can never be certain that a testing system (or tool) is correct.
- No testing tool can cope with every software program.
- Test engineers can never be sure that they completely understand a software product.
- We never have enough resources to perform software testing.
- We can never be certain that we achieve 100% adequate software testing.
24. Software Testing Process (diagram, V&V targets): system engineering → system test; requirements → validation test; software design → integration test; code & implementation → unit test.
25. Unit Test (Component-Level Test)
What? Unit testing: individual components are tested independently to ensure their quality.
Why? The focus is to uncover errors in design and implementation, including:
- the data structures in a component
- the program logic and program structure of a component
- the component interface
- the functions and operations of a component
Who? The developers of the components.
(Diagram: a white-box view exposes the component's internal logic, data, and structure, while a black-box view exercises its operations and functions only through the interface's inputs and outputs.)
26. Integration Testing
What? Integration test: a group of dependent components is composed and tested together to ensure the quality of their integration.
Why? The objective is to detect errors in:
- the design and construction of the software architecture
- integrated functions or operations at the subsystem level
- the interfaces and interactions between components
- resource integration and/or environment integration
Who? Either developers and/or test engineers.
(Diagram: two components, each with operations and functions, connected through their interfaces' inputs and outputs.)
27. System Validation Testing
What? System validation test: the entire integrated software that composes the system is tested against the requirements to ensure that we have built the right product.
Why? The objective is to detect errors in:
- system input/output
- system functions and information data
- system interfaces with external parts
- user interfaces
- system behavior and performance
Who? Test engineers in the ITG or SQA people.
(Diagram: the system's operations, functions, and behavior, accessed by the user through the user interface and by external parts through external interfaces.)
28. System Testing
What? System test: the system software is tested as a whole. It verifies that all elements mesh properly and that all system functions and performance targets are achieved in the target environment.
Why? The focus areas are:
- system functions and performance
- system reliability and recoverability (recovery test)
- system installation (installation test)
- system behavior under special conditions (stress and load test)
- system user operations (acceptance test/alpha test)
- hardware and software integration and collaboration
- integration of external software and the system
Who? Test engineers in the ITG or SQA people. When a system is to be marketed as a software product, a testing process called beta testing is often used.
29. Test Issues in the Real World
- Software testing is very expensive; cost is a central constraint.
- How do we achieve test automation?
- When should we stop software testing? Test criteria, test coverage, adequate testing.
30. Types of Testing
- Aspect: functional, robustness, performance, reliability, usability
- Level: unit, integration, system, acceptance
- Accessibility: white box, grey box, black box
- Regression
31. Levels of Testing: what users really need → acceptance testing; requirements → system testing; design → integration testing; code → unit testing.
32. System Testing
- Tests the overall system (the integrated hardware and software) to determine whether the system meets its requirements
- Focuses on the use and interaction of system functionalities rather than details of implementations
- Should be carried out by a group independent of the code developers
- Should be planned with the same rigor as the other phases of software development
33. Regression Testing
- Development versus maintenance: development costs about 1/3, maintenance about 2/3
- Testing in the maintenance phase: how can we test modified or newly inserted programs?
  - Ignore the old test suites and build new ones from scratch, or
  - Reuse the old test suites and minimize the number of new test cases
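The reuse option above can be sketched as rerunning the old suite against the modified program; the suite and `modified_program` below are hypothetical examples:

```python
# Old test suite kept from the previous release: (input, expected output) pairs.
old_test_suite = [(1, 2), (4, 8), (0, 0)]

def modified_program(x):
    return x * 2        # the new version of the program under maintenance

def regression_test(program, suite):
    # Return every test case whose expected output no longer matches.
    return [(i, e) for (i, e) in suite if program(i) != e]
```

An empty failure list means the modification preserved the behavior that the old tests cover; new tests are then added only for the changed or inserted code.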
34. Accessibility of Testing
- White-box testing (structural testing, program-based testing)
  - Assumes that the program is available
  - Derives test cases from the program
  - Controls and observes the internal structure of the program
- Black-box testing (functional testing, specification-based testing)
  - Assumes that the program is unavailable or that testers do not want to look at the details of the program
  - Derives test cases from the requirements of the program
  - Controls and observes the program only through its external interfaces
- Grey-box testing
35. Program-Based Testing (I)
- Main steps
  - Examine the internal structure of a program
  - Design a set of inputs satisfying a coverage criterion
  - Apply the inputs to the program and collect the actual outputs
  - Compare the actual outputs with the expected outputs
- Limitations
  - Cannot catch omission errors: what requirements are missing from the program?
  - Cannot provide test oracles: what is the expected output for an input?
36. Program-Based Testing (II) (diagram): apply an input to the program, observe the output, and validate the observed output against the expected output. Who will take care of the test oracles?
37. Specification-Based Testing (I)
- Main steps
  - Examine the structure of the program's specification
  - Design a set of inputs from the specification satisfying a coverage criterion
  - Apply the inputs to the specification and collect the expected outputs
  - Apply the inputs to the program and collect the actual outputs
  - Compare the actual outputs with the expected outputs
- Limitations
  - Specifications are not usually available: many companies still have only code and no other documents
38. Specification-Based Testing (II) (diagram): apply the same input to both the specification and the program; the specification yields the expected output and the program yields the actual output; validate the actual output against the expected output.
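The steps above can be sketched with an executable specification acting as the oracle; the `abs`-style spec and implementation are hypothetical examples, not from the slides:

```python
def specification(x):
    return abs(x)                    # executable model of the required behavior

def program(x):
    return x if x >= 0 else -x       # the implementation under test

def spec_based_test(inputs):
    results = []
    for x in inputs:
        expected = specification(x)  # apply the input to the specification
        actual = program(x)          # apply the input to the program
        results.append(actual == expected)   # compare actual vs. expected
    return all(results)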
39. Object-Oriented Program Testing
- Unit testing for OO programs
  - A class is a set of variables and member functions
  - 50% of member functions are just 10 lines of code
  - A class is often the unit of testing in C++ or Java
- Integration testing for OO programs
  - Rule of thumb in OO development: make a large number of small classes in a bottom-up fashion
  - There are several relationships between classes: association, aggregation, inheritance, concurrency
40. Categories of Software Errors
- User interface errors, such as output errors and incorrect user messages
- Function errors
- Defective hardware
- Incorrect program version
- Testing errors
- Requirements errors
- Design errors
- Documentation errors
- Architecture errors
- Module interface errors
- Performance errors
- Error-handling errors
- Boundary-related errors
- Logic errors, such as calculation errors
- State-based behavior errors
- Communication errors
- Program structure errors, such as control-flow errors
41. Software Test Planning
As with the other software engineering phases, it is impossible to have a cost-effective software test process without very good planning. The major objective of software test planning is to generate a well-defined software test plan.
What content should be included in a software test plan?
- Testing activities and schedule
- Testing tasks and assignments
- Selected test strategy and test models
- Test methods and criteria
- Required test tools and environment
- Problem tracking and reporting
- Test cost estimation
Other needed items: quality control process and standards
42. Software Test Requirements
Before starting test design, we must identify our test objectives, focuses, and test items. The major purpose is to help us understand the targets of software testing. This step can be done based on:
- requirements specifications
- input from developers
- feedback from customers
The benefits are:
- Identify and rank the major focuses of software testing
- Check the requirements to see if they are correct, complete, and testable
- Enhance and update the system requirements to make sure they are testable
- Support the decision on selecting or defining a test strategy
The essentials of testing requirements include:
- Specified testing methods
- Required test types and test coverage criteria
- Selected or required test tools
- Testing focuses and test items for each type of software testing
An example of a performance testing requirement: “Check the system performance to make sure it meets the 99% system reliability requirement.”
A typical example of a required test item: Test item #I: “Test the call waiting feature (REQ #j) during system testing based on the given requirements specifications.”
43. Software Test Design
Software test design is an important task for software test engineers. A good test engineer always knows how to come up with quality test cases and perform effective tests that uncover as many bugs as possible on a very tight schedule.
What do you need to come up with an effective test set?
- Choose a good test model and an effective testing method
- Apply well-defined test criteria
- Generate a cost-effective test set based on the selected test criteria
- Write a good test case specification document
What is a good test case?
- It has a high probability of discovering a software error
- It is designed to target a specific test requirement
- It is generated by following an effective test method
- It is well documented and easily tracked
- It is easy to perform, and its expected results are simple to spot
- It avoids redundancy with other test cases
44. Software Test Design
What content should be included in a test case?
- Test case ID
- Test item
- Written by (tester name)
- Documented date
- Test type
- Test suite #
- Product name
- Release and version no.
- Test case description
- Operation procedure
- Pre-conditions
- Post-conditions
- Input data and/or events
- Expected output data and/or events
- Required test scripts
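The fields above can be captured in a machine-readable record; this dataclass sketch (field names and the login example are hypothetical, showing only a subset of the slide's fields) keeps test case specifications uniform and trackable:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    test_case_id: str
    test_item: str
    written_by: str
    test_type: str
    description: str
    preconditions: list = field(default_factory=list)
    inputs: dict = field(default_factory=dict)
    expected_output: object = None

tc = TestCase(
    test_case_id="TC-001",
    test_item="login feature",
    written_by="tester",
    test_type="functional",
    description="valid credentials are accepted",
    inputs={"user": "alice", "password": "secret"},
    expected_output=True,
)
```

Storing test cases as structured data like this is what later makes the automation levels (test case and suite management) possible.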
45. Software Test Execution
Test execution can be performed:
- using a manual approach
- using a systematic approach
- using a semi-automatic approach
The basic activities in test execution are:
- Select a test case
- Set up the pre-conditions for the test case
- Set up the test data
- Run the test case following its procedure
- Track the test operations and results
- Monitor the post-conditions of the test case and check for the expected results
- Verify the test results and report any problems
- Record each test execution
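The basic activities above can be sketched as one execution loop; the dictionary-based test case and `execution_log` are hypothetical illustrations of the select/set-up/run/verify/record cycle:

```python
execution_log = []   # record of each test execution

def execute(test_case, program):
    test_case["setup"]()                          # set up pre-conditions and data
    actual = program(test_case["input"])          # run the test case
    passed = actual == test_case["expected"]      # verify the test result
    execution_log.append((test_case["id"], passed))   # record the execution
    return passed

case = {"id": "TC-1", "setup": lambda: None, "input": 2, "expected": 4}
```

A failing entry in the log is what triggers the problem-reporting activity described on the following slides.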
46. Software Test Execution
What do you need to perform test execution?
- a test plan
- a test design specification with test case specifications
- a test suite with documented test cases (optional)
- test supporting facilities, such as test drivers, test stubs, and simulators
What are the outcomes of a test execution?
- Test execution records and reports
- Problem and bug reports
- Error logs
With automatic test execution tools (or test control tools), we can:
- run tests automatically
- record and replay
47. Problem Analysis and Report
When do we issue a problem? Whenever a bug or problem is found, we need to write a problem report immediately.
What are the contents of a problem report?
- Problem ID
- Current software name, release no., and version no.
- Test type
- Reported by
- Reported date
- Test case ID
- Subsystem (or module name)
- Feature name (or subject)
- Problem type (REQ, design, coding, …)
- Problem severity (fatal, major, minor, …)
- Problem summary and detailed description
- Cause analysis
- How to reproduce?
- Attachments
48. Problem Analysis and Report
How do we track, control, and manage issued problems?
- A systematic solution is needed to track and maintain the reported problems in a repository.
- Define and implement a problem control and analysis process to control problem tracking, reporting, analysis, fixing, and resolution.
Characteristics of a problem report:
- Simple and understandable
- Traceable and numbered
- Reproducible
- Non-judgmental
Problem analysis:
- Finding the most serious consequences
- Finding the simplest and most general conditions
- Finding alternative paths to the same problem
- Finding related problems
49. Software Test Review
What is a review?
- A review is a verification activity to assure that a correct method has been used to create a software product.
- Participants in a review take full responsibility for the results.
There are two types of reviews:
- Formal reviews: use a well-defined review method (or technique) and generate formal review results
- Informal reviews: use a desk-checking approach, conduct an informal review, and generate informal review results
50. Software Test Review
Products that should be reviewed during software testing: test plan, test design specification, test procedure specification, test report, and problem reports.
What do reviews accomplish?
- Reviews provide the primary mechanism for reliably evaluating progress.
- Reviews train and educate the participants and have a significant positive effect on staff competence.
- Reviews give early feedback and prevent more serious problems from arising.
- Reviews bring individual capability to light.
51. Test Management
Test management encompasses:
- management of a test team
- management of a test process
- management of test projects
A test manager's roles:
- Leadership: planning projects, setting a direction, motivating people, building a team, managing engineers
- Controller: product evaluation, performance evaluation, changing to a new direction
- Supporter: assisting and training engineers; enforcing and controlling test methods, standards, and criteria; selecting and developing test tools
52. Test Management
Test management functions:
- Management: manage test projects, team members, and test processes
- Motivation: motivate quality work from team members; stimulate new ideas and creative solutions
- Methodology: control the set-up of the test methodology, process, and standards; control the establishment of test criteria
- Mechanization: control the selection and development of test tools; provide a mechanism for the configuration management of test suites; control the set-up of an integrated test environment
- Measurement: measure test cost, complexity, and effort; measure engineer performance; measure test effectiveness; measure product quality
54. Test Management
Questions for managers who claim to be managing software testing:
- Management: Do your plans address testing? Do you know who is responsible? Have you published your testing policy?
- Motivation: Do you provide incentives for people to do quality work? Do you encourage people to take advantage of training opportunities in testing methods?
- Methodology: Are your testing methods proceduralized, and are people trained in their use? Are you aware of new testing techniques, and are you working to introduce them?
55. Test Management
Questions for managers who claim to be managing software testing:
- Mechanization: Do you have sufficient hardware and equipment to support testing? Have you provided appropriate software testing tools and aids? Do you evaluate automated testing aids on an ongoing basis?
- Measurement: Do you track errors, faults, and failures? Do you know what testing costs? Do you quantitatively measure testing performance?
56. Test Engineers' Tasks
What does a test engineer do?
- Ensure that testing is performed
- Ensure that testing is documented
- Ensure that testing methodology, techniques, and standards are established and developed
The basic skills of a qualified test engineer:
- Controlled: an organized individual with systematic planning; good planning of testing
- Competent: technical awareness of testing methods, tools, and criteria
- Critical: an inner determination to discover problems
- Comprehensive: a total understanding of the given system and specifications; pays good attention to details
- Considerate: the ability to relate to others and resolve conflicts
57. Basic Test Engineers' Tasks
The basic tasks of a test engineer include:
- Prepare testing plans and organize testing activities
- Design test cases and document them using well-defined test methods
- Develop test procedures and test data
- Write problem reports and test execution records
- Use testing tools and aids
- Review test designs, results, and problems
- Test maintenance changes
- Oversee acceptance tests
58. Test Automation and Tools
What is software test automation? Software test automation refers to a process and activities that reduce manual testing activities and effort in a software development lifecycle.
Why software test automation?
- Reduce software testing time, cost, and effort
- Increase the quality of software testing
- Apply well-defined test methods through tools
- Relieve test engineers of complicated and redundant work
What do we need to automate software testing?
- Work within a limited cost and schedule
- Select and introduce quality test tools
- Develop the necessary effective and efficient test tools and facilities
- Apply well-defined testing methods and coverage
- Form an integrated test environment supporting the various software testing activities for products on a production line
59. Test Automation and Tools
Basic steps for test automation:
- Level #1: Automatic test management: test case and suite management and documentation; test script management
- Level #2: Automatic test execution: black-box test control and execution; white-box test control and execution
- Level #3: Automatic test generation: black-box test generation; white-box test generation
- Level #4: Automatic test measurement: test coverage and metrics measurement; test cost and complexity measurement
60. Test Automation and Tools
Classification of software test tools:
- Program-based test tools (or white-box test tools)
- Program test metrics tools
- Test generation tools, such as random test tools
- Program specification-based test tools
- Program regression test tools (such as program change analysis tools)
- Protocol-based conformance test tools
- Program performance test tools: performance monitoring tools and performance evaluation tools
- Program load test tools
- GUI record-and-replay tools
- Test management and configuration management tools