08 fse verification: Presentation Transcript

    • B. Computer Sci. (SE) (Hons.)
      CSEB233: Fundamentals of Software Engineering
      Software Verification and Validation
    • Objectives
      - Discuss the fundamental concepts of software verification and validation
      - Conduct software testing and determine when to stop
      - Describe several types of testing: unit testing, integration testing, validation testing, and system testing
      - Produce standard software test documentation
      - Use a set of techniques for the creation of test cases that meet overall testing objectives and the testing strategies
    • Software Verification & Validation: Fundamental Concepts
    • Verification & Validation (1)
      - V&V must be applied at each framework activity in the software process
      - Verification refers to the set of tasks that ensure that software correctly implements a specific function
      - Validation refers to a different set of tasks that ensure that the software that has been built is traceable to customer requirements
      - Boehm states this another way:
        Verification: "Are we building the product right?"
        Validation: "Are we building the right product?"
    • Verification & Validation (2)
      - V&V have two principal objectives:
        Discover defects in a system
        Assess whether or not the system is useful and usable in an operational situation
      - V&V should establish confidence that the software is fit for purpose
      - This does NOT mean completely free of defects; rather, it must be good enough for its intended use, and the type of use will determine the degree of confidence that is needed
    • Verification & Validation (3)
      - V&V (SQA) activities include:
        SQA activities: technical reviews, quality and configuration audits, performance monitoring, simulation, feasibility study, documentation review, database review, and algorithm analysis
        Testing activities: development testing, qualification testing, acceptance testing, and installation testing
    • Software Verification & Validation: Software Testing
    • Software Testing
      - The process of exercising a program with the specific intent of finding errors prior to delivery to the end user
      - Must be planned carefully to avoid wasting development time and resources, and conducted systematically
      - What testing shows? (diagram)
    • Who Tests the Software? (1)
      - Developer: understands the system, but will test "gently"; driven by "delivery"
      - Independent Tester: must learn about the system, and will attempt to break it; driven by quality
    • Who Tests the Software? (2)
      - Misconceptions:
        The developer should do no testing at all
        Software should be "tossed over the wall" to strangers who will test it mercilessly
        Testers are not involved with the project until it is time for it to be tested
    • Who Tests the Software? (3)
      - The developer and the Independent Test Group (ITG) must work together throughout the software project to ensure that thorough tests will be conducted
      - An ITG does not have the "conflict of interest" that the software developer might experience
      - While testing is conducted, the developer must be available to correct errors that are uncovered
    • Testing Strategy (1)
      - Identifies the steps to be undertaken, when these steps are undertaken, and how much effort, time, and resources are required
      - Any testing strategy must incorporate: test planning, test case design, test execution, and resultant data collection and evaluation
      - Should provide guidance for practitioners and a set of milestones for the manager
    • Testing Strategy (2)
      - Characteristics of software testing strategies proposed in the literature:
        To perform effective testing, you should conduct effective technical reviews; by doing this, many errors will be eliminated before testing commences
        Testing begins at the component level and works "outward" toward the integration of the entire computer-based system
    • Testing Strategy (3)
      - Different testing techniques are appropriate for different software engineering approaches and at different points in time
      - Testing is conducted by the developer of the software and (for large projects) an independent test group
      - Testing and debugging are different activities, but debugging must be accommodated in any testing strategy
    • Overall Software Testing Strategy
      - May be viewed in the context of the spiral
      - Begins by 'testing-in-the-small' and moves toward 'testing-in-the-large'
    • Overall Software Testing Strategy
      - Unit Testing: focuses on each unit of the software (e.g., component, module, class) as implemented in source code
      - Integration Testing: focuses on issues associated with verification and program construction as components begin interacting with one another
    • Overall Software Testing Strategy
      - Validation Testing: provides assurance that the software meets the validation criteria (established during requirements analysis), i.e., all functional, behavioral, and performance requirements
      - System Testing: verifies that all system elements mesh properly and that overall system function and performance have been achieved
    • When to Stop Testing?
      - Testing is potentially endless; we cannot test until all the defects are unearthed and removed, which is impossible
      - At some point, we have to stop testing and ship the software; the question is, when?
      - Realistically, testing is a trade-off between budget, time, and quality; it is driven by profit models (Pan, 1999)
    • When to Stop Testing?
      - The pessimistic, and unfortunately most often used, approach is to stop testing whenever some or any of the allocated resources (time, budget, or test cases) are exhausted
      - The optimistic stopping rule is to stop testing when either reliability meets the requirement, or the benefit from continuing testing cannot justify the testing cost
    • Software Verification & Validation: Types of Test
    • Unit Testing
      - Focuses on assessing:
        internal processing logic and data structures within the boundaries of a component (module)
        proper information flow of module interfaces
        local data, to ensure that integrity is maintained
        boundary conditions
        basis (independent) paths
        all error-handling paths
      - If resources are too scarce for comprehensive unit testing, select critical or complex modules and unit test only these (a minimal sketch follows below)
    • Unit Testing
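    A minimal unit-test sketch (illustrative, not from the slides): a hypothetical clampValue() module is exercised in isolation, covering a nominal path, both boundary conditions, and an error-handling path with assert-based checks.

      // Unit-test driver for a single module (hypothetical example).
      #include <cassert>
      #include <stdexcept>

      // Unit under test: clamps a value into the range [lo, hi].
      int clampValue(int value, int lo, int hi) {
          if (lo > hi) throw std::invalid_argument("lo > hi");
          if (value < lo) return lo;
          if (value > hi) return hi;
          return value;
      }

      int main() {
          assert(clampValue(5, 0, 10) == 5);    // nominal (independent) path
          assert(clampValue(-1, 0, 10) == 0);   // lower boundary condition
          assert(clampValue(11, 0, 10) == 10);  // upper boundary condition
          bool threw = false;
          try { clampValue(1, 10, 0); } catch (const std::invalid_argument&) { threw = true; }
          assert(threw);                        // error-handling path
          return 0;                             // all unit checks passed
      }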
    • Integration Testing
      - After unit testing of individual modules, they are combined together into a system
      - A question commonly asked once all modules have been unit tested: "If they work individually, why do you doubt that they'll work when we put them together?"
      - The problem is "putting them together", i.e. interfacing:
        Data can be lost across an interface
        Global data structures can present problems
        Sub-functions, when combined, may not produce the desired function
    • Integration Testing
      - Incremental integration testing strategies:
        Bottom-up integration
        Top-down integration
        Regression testing
        Smoke testing
    • Bottom-up Integration
      - An approach where the lowest-level modules are tested first, then used to facilitate the testing of higher-level modules
      - The process is repeated until the module at the top of the hierarchy is tested
      - Top-level modules are the most important, yet are tested last
      - Helpful only when all or most of the modules at the same development level are ready
    • Bottom-up Integration
      - The steps (for a hierarchy in which module C calls modules D and E):
        Test D and E individually, using a dummy program (a 'driver')
        Low-level components are combined into clusters that perform a specific software function, and the cluster is tested
        Test C such that it calls D/E; if an error occurs, we know that the problem is in C or in the interface between C and D/E
        Drivers are removed and clusters are combined, moving upward in the program structure
        (A minimal driver sketch follows below.)
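    A minimal driver sketch for bottom-up integration (illustrative; the module names D and E follow the slide's example hierarchy, but their interfaces are assumed):

      #include <cassert>

      // Low-level modules under test (assumed signatures).
      int moduleD(int x) { return x * 2; }
      int moduleE(int x) { return x + 1; }

      // The driver stands in for the not-yet-integrated caller (module C):
      // it feeds test inputs to D and E and checks their outputs directly.
      int main() {
          assert(moduleD(3) == 6);
          assert(moduleE(3) == 4);
          return 0;  // D and E pass in isolation; they can now be clustered under C
      }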
    • Top-down Integration
      - The steps:
        The main/top module is used as a test driver, and stubs substitute for the modules directly subordinate to it
        Subordinate stubs are replaced one at a time with real modules (following the depth-first or breadth-first approach)
        Tests are conducted as each module is integrated
        On completion of each set of tests, another stub is replaced with a real module
        Regression testing may be used to ensure that new errors are not introduced
        The process continues from the 2nd step until the entire program structure is built
    • Top-down Integration
      - Example steps (module A at the top, calling module B):
        Test A individually (use stubs for the other modules)
        Depending on the integration approach selected, subordinate stubs are replaced one at a time with actual components
        Test A such that it calls B (use stubs for the other modules); if an error occurs, we know that the problem is in B or in the interface between A and B
        In a 'depth-first' structure, replace stubs one at a time, depth-first, and re-run the tests
        (A minimal stub sketch follows below.)
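    A minimal stub sketch for top-down integration (illustrative; the module names A and B follow the slide's example, but their interfaces are assumed):

      #include <cassert>

      // Stub standing in for the real, not-yet-integrated module B:
      // it returns a fixed, known value so that A can be tested first.
      int moduleB_stub(int) { return 42; }

      // Top-level module A, wired to call whichever version of B it is given.
      int moduleA(int x, int (*b)(int)) { return b(x) + 1; }

      int main() {
          // Test A against the stub; later the stub is replaced by the real B
          // and the same test is re-run (regression testing).
          assert(moduleA(7, moduleB_stub) == 43);
          return 0;
      }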
    • Regression Testing (1)
      - Focuses on retesting after changes are made
      - Whenever software is corrected, some aspect of the software configuration is changed, e.g., the program, its documentation, or the data that support it
      - Regression testing helps to ensure that changes (due to testing or for other reasons) do not introduce unintended behavior or additional errors
    • Regression Testing (2)
      - In traditional regression testing, we reuse the same tests
      - In risk-oriented regression testing, we test the same areas as before, but we use different (increasingly complex) tests
      - Regression testing may be conducted manually, by re-executing a subset of all test cases, or by using automated capture/playback tools (a small re-execution sketch follows below)
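    A small sketch of re-executing a subset of test cases after a change (illustrative only; the test registry and the selection rule are assumptions, not a specific tool named in the slides):

      #include <functional>
      #include <iostream>
      #include <string>
      #include <vector>

      struct TestCase {
          std::string affectedModule;     // module the test case exercises
          std::function<bool()> run;      // returns true on pass
      };

      int main() {
          std::vector<TestCase> suite = {
              {"billing", [] { return 2 + 2 == 4; }},
              {"reports", [] { return true; }},
          };
          // After a change to "billing", re-run only the test cases that touch it.
          for (const auto& t : suite) {
              if (t.affectedModule == "billing")
                  std::cout << t.affectedModule << (t.run() ? " passed" : " FAILED") << "\n";
          }
          return 0;
      }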
    • Smoke Testing (1)
      - A common approach for creating "daily builds" for product software
      - Software components that have been translated into code are integrated into a "build"
      - A build includes all data files, libraries, reusable modules, and engineered components that are required to implement one or more product functions
      - A series of tests is designed to expose errors that will keep the build from properly performing its function
    • Smoke Testing (2)
      - The intent should be to uncover "show stopper" errors that have the highest likelihood of throwing the software project behind schedule
      - The build is integrated with other builds, and the entire product (in its current form) is smoke tested daily
      - The integration approach may be top-down or bottom-up
    • Validation Testing (1)
      - Focuses on uncovering errors at the software requirements level
      - The SRS might contain 'Validation Criteria' that form the basis for a validation-testing approach
    • Validation Testing (2)
      - Validation-test criteria:
        all functional requirements are satisfied
        all behavioral characteristics are achieved
        all content is accurate and properly presented
        all performance requirements are attained
        documentation is correct, and
        usability and other requirements are met
    • Validation Testing (3)
      - An important element of the validation process is a configuration review/audit
      - Ensures that all elements of the software configuration have been properly developed, are cataloged, and have the necessary detail to strengthen the support activities
    • Validation Testing (4)
      - A series of acceptance tests is conducted to enable the customer to validate all requirements, i.e., to make sure the software works correctly for the intended user in his or her normal work environment
      - Alpha test: a version of the complete software is tested by the customer under the supervision of the developer, at the developer's site
      - Beta test: a version of the complete software is tested by the customer at his or her own site, without the developer being present
    • System Testing (1)
      - A series of different tests to verify that system elements have been properly integrated and perform allocated functions
      - Types of system tests:
        Recovery testing
        Security testing
        Stress testing
        Performance testing
        Deployment testing
    • System Testing (2)
      - Recovery Testing: forces the software to fail in a variety of ways and verifies that recovery is properly performed
      - Security Testing: verifies that protection mechanisms built into a system will, in fact, protect it from improper penetration
      - Stress Testing: executes a system in a manner that demands resources in abnormal quantity, frequency, or volume
    • System Testing (3)
      - Performance Testing: tests the run-time performance of software within the context of an integrated system
      - Deployment Testing: examines all installation procedures and specialized installation software that will be used by customers, as well as all documentation that will be used to introduce the software to end users
    • Software Verification & Validation: Software Test Documentation
    • Software Test Documentation (1)
      - IEEE 829-2008, Standard for Software Test Documentation: an IEEE standard that specifies the form of a set of documents for use in eight defined stages of software testing
      - The documents are:
        Test Plan
        Test Design Specification
        Test Case Specification
        Test Procedure Specification
        Test Item Transmittal Report
        Test Log
        Test Incident Report
        Test Summary Report
    • Software Test Documentation (2)
      - Test Plan: a management planning document that shows:
        How the testing will be done, including System Under Test (SUT) configurations
        Who will do it
        What will be tested
        How long it will take (this may vary, depending upon resource availability)
        What the test coverage will be, i.e., what quality level is required
    • Software Test Documentation (3)
      - Test Design Specification: details the test conditions and the expected results, as well as the test pass criteria
      - Test Procedure Specification: details how to run each test, including any set-up preconditions and the steps that need to be followed
    • Software Test Documentation (4)
      - Test Item Transmittal Report: reports on when tested software components have progressed from one stage of testing to the next
      - Test Log: records which test cases were run, who ran them, in what order, and whether each test passed or failed
      - Test Incident Report: details, for any test that failed, the actual versus expected result, and other information intended to throw light on why a test has failed
    • Software Test Documentation (5)
      - Test Summary Report: a management report providing any important information uncovered by the tests accomplished, including assessments of the quality of the testing effort, the quality of the software system under test, and statistics derived from Incident Reports
      - The report also records what testing was done and how long it took, in order to improve any future test planning
      - This final document is used to indicate whether the software system under test is fit for purpose, according to whether or not it has met the acceptance criteria defined by the project stakeholders
    • Software Verification & Validation: Creating Test Cases
    • Test-case Design (1)
      - Focuses on a set of techniques for the creation of test cases that meet overall testing objectives and the testing strategies
      - These techniques provide systematic guidance for designing tests that:
        Exercise the internal logic and interfaces of every software component/module
        Exercise the input and output domains of the program to uncover errors in program function, behaviour, and performance
    • Test-case Design (2)
      - For conventional applications, software is tested from two perspectives:
      - 'White-box' testing:
        Focuses on the program control structure (internal program logic)
        Test cases are derived to ensure that all statements in the program have been executed at least once during testing and that all logical conditions have been exercised
        Performed early in the testing process
      - 'Black-box' testing:
        Examines some fundamental aspect of a system with little regard for the internal logical structure of the software
        Performed during later stages of testing
    • White-box Testing (1)
      - Using the white-box testing method, you may derive test cases that:
        Guarantee that all independent paths within a module have been exercised at least once
        Exercise all logical decisions on their true and false sides
        Execute all loops at their boundaries and within their operational bounds
        Exercise internal data structures to ensure their validity
      - Example method: basis path testing
    • White-box Testing (2)
      - Basis path testing: test cases derived to exercise the basis set are guaranteed to execute every statement in the program at least once during testing
    • Deriving Test Cases (1)
      - Steps to derive test cases by applying the basis path testing method:
        Using the design or code, draw a corresponding flow graph; the flow graph depicts logical control flow using the notation illustrated on the next slide (refer to Figure 18.2 on page 486 for a comparison between a flowchart and a flow graph)
        Calculate the cyclomatic complexity V(G) of the flow graph
        Determine a basis set of independent paths
        Prepare test cases that will force execution of each path in the basis set
    • Deriving Test Cases (2)
      - Flow graph notation (diagram): sequence, IF, WHILE, UNTIL, CASE
    • Drawing Flow Graph: Example
      // (excerpt; assumes <cmath> and <iostream>, with using namespace std)
      void foo (float y, float a[], int n)
      {
          float x = sin (y);
          float z;                        // z declared here for completeness
          if (x > 0.01)                   // node 1 (predicate)
              z = tan (x);                // node 2
          else
              z = cos (x);                // node 3
          for (int i = 0; i < x; ++i)     // node 5 (predicate)
          {
              a[i] = a[i] * z;            // node 6
              cout << a[i];               // node 7
          }
      }                                   // node 8
      (Flow graph diagram: nodes 1-8 with predicate nodes 1 and 5; regions R1, R2, R3 plus the outer region.)
    • Deriving Test Cases (3)
      - The arrows on the flow graph, called edges or links, represent flow of control and are analogous to flowchart arrows
      - Areas bounded by edges and nodes are called regions; when counting regions, we include the area outside the graph as a region
    • Deriving Test Cases: Example
      - Step 1: Draw a flow graph (diagram)
    • Deriving Test Cases: Example
      - Step 2: Calculate the cyclomatic complexity, V(G)
      - Cyclomatic complexity can be used to count the minimum number of independent paths
      - A number of industry studies have indicated that the higher the V(G), the higher the probability of errors
      - The SEI provides the following basic risk assessment based on the V(G) value of the code:
        Cyclomatic complexity  Risk evaluation
        1 to 10                A simple program, without very much risk
        11 to 20               A more complex program, moderate risk
        21 to 50               A complex, high-risk program
        > 50                   An un-testable program (very high risk)
    • Deriving Test Cases: Example
      - Ways to calculate V(G):
        V(G) = the number of regions of the flow graph
        V(G) = E - N + 2, where E is the number of edges and N is the number of nodes
        V(G) = P + 1, where P is the number of predicate nodes in the flow graph (each node that contains a condition)
      - Example:
        V(G) = number of regions = 4
        V(G) = E - N + 2 = 16 - 14 + 2 = 4
        V(G) = P + 1 = 3 + 1 = 4
        (A small sketch applying the three formulas follows below.)
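    A small sketch applying the three V(G) formulas to a toy flow graph (illustrative only; the edge list below is an assumed if/else diamond, not the slide's figure, so its V(G) value differs from the 4 computed above):

      #include <cassert>
      #include <set>
      #include <utility>
      #include <vector>

      int main() {
          // Toy flow graph: 1 -> 2 (true), 1 -> 3 (false), 2 -> 4, 3 -> 4
          std::vector<std::pair<int, int>> edges = {{1, 2}, {1, 3}, {2, 4}, {3, 4}};
          std::set<int> nodes;
          for (const auto& e : edges) { nodes.insert(e.first); nodes.insert(e.second); }

          int E = static_cast<int>(edges.size());  // 4 edges
          int N = static_cast<int>(nodes.size());  // 4 nodes
          int P = 1;                               // one predicate node (node 1)

          assert(E - N + 2 == 2);                  // V(G) = E - N + 2
          assert(P + 1 == 2);                      // V(G) = P + 1
          // Region count agrees: one bounded region plus the outer region = 2.
          return 0;
      }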
    • Deriving Test Cases: Example
      - Step 3: Determine a basis set of independent paths
        Path 1: 1, 2, 3, 4, 5, 6, 7, 8, 12
        Path 2: 1, 2, 3, 12
        Path 3: 1, 2, 3, 4, 5, 9, 10, 3, …
        Path 4: 1, 2, 3, 4, 5, 9, 11, 3, …
      - Step 4: Prepare test cases
        Test cases should be derived so that all of these paths are executed
        A dynamic program analyser may be used to check that paths have been executed
        (A hedged test-case sketch for the earlier foo() example follows below.)
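    A hedged test-case sketch: inputs chosen to force execution of the branch/loop combinations in the earlier foo() example (not the node-numbered example above; the input values are assumptions, and the combination "if-branch taken, loop skipped" is infeasible because x > 0.01 already makes the loop run):

      #include <cmath>
      #include <iostream>

      void foo(float y, float a[], int n) {
          (void)n;                        // unused, kept from the original signature
          float x = std::sin(y);
          float z;
          if (x > 0.01f) z = std::tan(x);
          else           z = std::cos(x);
          for (int i = 0; i < x; ++i) {
              a[i] = a[i] * z;
              std::cout << a[i] << "\n";
          }
      }

      int main() {
          float a[4] = {1.0f, 1.0f, 1.0f, 1.0f};
          foo(0.0f, a, 4);      // x = 0:       else branch, loop not entered
          foo(1.5708f, a, 4);   // x ~= 1:      if branch,   loop body executed
          foo(0.005f, a, 4);    // x ~= 0.005:  else branch, loop body executed
          return 0;
      }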
    • Summary (1)
      - Software testing plays an extremely important role in V&V, but many other SQA activities are also necessary
      - Testing must be planned carefully to avoid wasting development time and resources, and conducted systematically
      - The developer and the ITG must work together throughout the software project to ensure that thorough tests will be conducted
    • Summary (2)
      - The software testing strategy is to begin by 'testing-in-the-small' and move toward 'testing-in-the-large'
      - The IEEE 829-2008 standard specifies a set of documents for use in eight defined stages of software testing
      - The 'white-box' and 'black-box' techniques provide systematic guidance for designing test cases
      - We need to know when is the right time to stop testing
    • THE END
      Copyright © 2013 Mohd. Sharifuddin Ahmad, PhD, College of Information Technology