Lecture 11: Testing, Verification, Validation

Slide notes

  • As I said in the initial slides, testing is key, but you cannot rely on testing: quality cannot be tested in! As we discussed last week, software quality affects an increasing percentage of our lives, and this makes it more important to ensure that the software performs correctly. Dijkstra's quote states the obvious and underlines the problem: we will never know that we are bug free! Although being a great tester is difficult and a true skill comparable to being a great software designer, software testers are not rewarded as well as designers in most corporations. Hopefully that will change, especially in a world where much of the work is integration and the tester plays a crucial role.
  • This slide details the important role of testers in the early stages of a project, when requirements are being gathered. One sure way to produce requirements that satisfy Boehm's criteria is to bring the testers in: the constraints that testing imposes ensure crisper requirements. As requirements mature, in an ideal world, so should test plans, schemes, and documentation.
  • Testing involvement should begin as early in the project as possible. We will see later in the course that Extreme Programming takes this to the extreme: they create the tests and test before they create the code! Numerous studies have shown (and common sense dictates) that the later you catch an error, the more costly it is. With appropriate techniques you can employ testing at the earliest stages of the product; some of these techniques have been covered in previous lectures, including prototyping, storyboards, and demos. Each provides a test of some aspect of the specification or architectural design. A key question is WHY you are testing: for confidence in the software or for finding errors. As we will see, this creates a tension between the user, who wants confidence (that the code has no errors), and the developer/manager, who wants to find the errors.
  • There are many types of testing. We will cover each of these in some detail during this class. White box and black box testing are classic terms.
  • As in all things, we have to tune our vocabulary a bit to ensure that we all agree about terms. The fourth bullet is a way to understand how the terminology fits together, using my error-prone self as an example. Again we tune the terms: verification (did we build the system right?) and validation (have we built the right system?).
  • This figure, taken from van Vliet, places the test process in context. The input is a program or document (P or D); we develop a test strategy, provide a subset of the input both to the program or document and to an oracle that serves as the reference for the correct or "expected" output, and compare that with the real output, generating the test results.
  • Basically, testers look for faults while users look for flawless behavior. This creates some potentially interesting situations, with the testers ecstatic that they are exercising the system and finding bugs while the user simply wants the system to work correctly.
  • One technique (and we will really not spend much time on it) is to rigorously approach the specification, design, and development of code through formal methods and tools. In this scheme the testers play a significant role: first understanding the high-probability scenarios (the probability distribution), then testing them and continuing to iterate until a given reliability level is achieved.
  • This slide provides a bit more information on Boehm's criteria for requirements. Note the emphasis on traceability: a given requirement should be traceable to the spec, the design, and eventually the code. ICED-T, as you recall, is Intuitive, Consistent, Efficient, Durable, and Thoughtful.
  • Although not always followed, a discipline of traceability tables provides a firm foundation for the ongoing progress of the project. As you can see (on the next slide) there is a fair amount of detail that is woven into these tables.
  • This is a table relating requirements to subsystems. For example, requirement R02 depends on subsystems S01 and S03, thereby linking implementation to requirements. I derived this table from a software engineering book by Pressman (which is not a bad book and was a candidate for this course): R.S. Pressman, Software Engineering: A Practitioner's Approach, McGraw-Hill, 5th edition, 2001, ISBN 0-07-365578-3.
  • 50% of total system lifetime spent in maintenance is an underestimate, especially in this economy, where corporations squeeze as much use as possible out of older systems. The key in maintenance (including anti-regressive maintenance) is an active regression testing program that ensures fixes do not introduce new bugs and that expands the tests to account for newly discovered flaws. As computer cycles get cheaper, retesting all the code versus doing only selective retest becomes an easier decision, except when you need to interact with or simulate a highly distributed framework of systems.
  • Essentially this slide illustrates a roadmap, assembled into an IEEE spec, that details what is needed for great test planning. Note that some of these documents can be combined for smaller projects. An even more extensive testing scheme is involved if you are dealing with safety-critical software such as avionics, and it is usually prescribed by external agencies.
  • An outline of the 1012 spec.
  • This slide lists some manual test techniques; we will focus on only a few of them, marked with asterisks on the slide.
  • This slide is fairly descriptive. Note that the code author is silent and that faults are categorized. Once the code is inspected, the moderator is responsible for ensuring that the author corrects it. One key to making this scheme successful is a constructive attitude: inspections are not a device for grading the developer but a device for improving the product, and perhaps a learning opportunity for the developer (and the inspectors and moderators).
  • In general, walkthroughs are more informal (although Parnas suggests that walkthroughs are more effective when participants are assigned roles). I have used this (without the Parnas roles) extensively and found walkthroughs to be really useful and a learning experience. On larger projects they are more difficult to do, and more formal procedures should be used. I have participated in inspections and found them to be effective, but unlike many, I insist that the code be subjected to automatic techniques (such as compilation) first.
  • These statistics from Humphrey are stunning! If you are not using some sort of walkthrough or inspection technique in your development project, you are losing out on an excellent chance to improve your system!
  • SAAM is useful for "software with a future": code that you are sure will be around and will evolve. In the next few classes, and for the rest of your software engineering career, the concepts of cohesion and coupling are crucial, and we will return to them several times. Even if you are not inclined to use SAAM, the act of detailing and using scenarios is valuable and contributes to a high-quality product with high customer satisfaction (that is, if you include the customer in your scenario definitions!).
  • Coverage-based techniques should be a component of any testing plan, though their implementation is often fairly involved. There are a variety of approaches to coverage testing, some of which are listed here; you can find more in any testing book. These are usually best done during unit testing, although the testing team may spot-check for compliance. Unit testing is usually done by the developer of a module.
  • Data flow coverage can become very involved, especially as you iterate through all possible definitions.
  • The same techniques can apply to requirements -- then you can determine whether you are testing the system for its coverage of the requirements.
  • This slide goes into detail about fault seeding. Two methods: (1) manually seed (create) faults, or (2) have one group of testers (A) treat the faults found by a separate group of testers (B) as the "seeded" faults that group A should find. Note the last two bullets: finding lots of errors in a module does not mean that the job is done; on the contrary, it implies that much more work needs to be done on that module.
  • Orthogonal array testing is described in B&Y. It is an excellent way of reducing the number of tests you need to run yet assuring comprehensive testing in certain dimensions.
  • This table is a nice comparison of top-down and bottom-up testing. The key is to attempt to do both -- one can never test enough!
  • These test programs are particularly relevant for systems with special emphasis. Acceptance tests are tests done by the purchaser to certify that what they expected to buy is what they received and it is in working order.
Transcript of "Lecture 11: Testing, Verification, Validation"

1. Lecture 11: Testing, Verification, Validation and Certification
   CS 540 – Quantitative Software Engineering
   You can't test in quality. Independent system testers.
2. Software Quality vs. Software Testing
   - Software Quality Management (SQM) refers to processes designed to engineer value and functional conformance and to minimize faults, failures, and defects.
     - Includes processes throughout the software life cycle (inspections, reviews, audits, validations, etc.)
   - Software testing is an activity performed for evaluating quality (and improving it) by identifying defects and problems. (SWEBOK)
3. SWEBOK Software Testing
   - "Software testing consists of dynamic verification of the behavior of a program on a finite set of test cases, suitably selected from the usually infinite executions domain, against the expected behavior."
     - Dynamic: software execution, vs. static inspections, reviews, etc.
     - Finite: a trade-off of resources
     - Selected: techniques vary in how they select tests (purpose)
     - Expected behavior: functional and operational
4. SWEBOK Software Testing Topics
   - Fundamentals:
     - Definitions, standards, terminology, etc.
     - Key issues: looking for defects vs. verify and validate
   - Test levels:
     - Unit test through beta test
     - Objectives: conformance, functional, acceptance, installation, performance/stress, reliability, usability, etc.
   - Test techniques:
     - Ad hoc, exploratory, specification-based, boundary-value
5. SWEBOK Software Testing Topics
   - Test techniques:
     - Ad hoc
     - Exploratory
     - Specification-based
     - Boundary-value analysis
     - Decision table
     - Finite state/model
     - Random generation
     - Code-based (control flow vs. data flow)
     - Application/technology-based: GUI, OO, protocol, safety, certification
6. SWEBOK Software Testing Topics
   - Test effectiveness metrics:
     - Fault types and categorization
     - Fault density
     - Statistical estimates of find/fix rates
     - Reliability modeling (failure occurrences)
     - Coverage measures
     - Fault seeding
7. Testing Metrics
   - Test case execution metrics (see the sketch below)
     - Percent planned, executed, passed
   - Defect rates
     - Defect rates based on NCLOC (non-comment lines of code)
     - Predicted defect detection (upper/lower control limits)
     - Fault density/fault criticality (software control board)
   - Fault types, classification, and root cause analysis
   - Fault on fault, breakage, regression test failures
   - Reliability, performance impact
   - Field faults / prediction of deficiencies
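As a concrete illustration of the first two metric families, here is a minimal Python sketch; the function names and the sample figures are invented for this example, not taken from the lecture.

```python
# A minimal sketch (names and numbers illustrative) of test-execution
# metrics and defect density per KNCLOC.

def execution_metrics(planned, executed, passed):
    """Percent of planned test cases that were executed and passed."""
    return {
        "pct_executed": 100.0 * executed / planned,
        "pct_passed": 100.0 * passed / planned,
    }

def defect_density(defects_found, ncloc):
    """Defects per thousand non-comment lines of code (KNCLOC)."""
    return defects_found / (ncloc / 1000.0)

print(execution_metrics(planned=200, executed=180, passed=162))
# {'pct_executed': 90.0, 'pct_passed': 81.0}
print(defect_density(defects_found=42, ncloc=28_000))  # 1.5 defects/KNCLOC
```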
8. Software Testing Axioms
   - Dijkstra: "Testing can show the presence of bugs but not their absence!"
   - Independent testing is a necessary but not sufficient condition for trustworthiness.
   - Good testing is hard and occupies 20% of the schedule.
   - Poor testing can dominate 40% of the schedule.
   - Test to assure confidence in operation, not to find bugs.
9. Software Quality and Testing Axioms
   - It is impossible to completely test software.
   - Software testing is a risk-based exercise.
   - All software contains faults and defects.
   - The more bugs you find, the more there are.
   - "A relatively small number of causes will typically produce a large majority of the problems or defects (80/20 rule)." --Pareto principle
10. Types of Tests
   - Unit
   - Interface
   - Integration
   - System
   - Scenario
   - Reliability
   - Stress
   - Verification
   - Validation
   - Certification
11. When to Test
   - Boehm: errors discovered in the operational phase incur costs 10 to 90 times higher than in the design phase.
     - Over 60% of the errors were introduced during design.
     - Two-thirds of these were not discovered until operations.
   - Test requirements specifications, architectures, and designs.
12. Testing Approaches
   - Coverage-based: all statements must be executed at least once.
   - Fault-based: detect faults; artificially seed faults and determine whether the tests catch at least X% of them.
   - Error-based: focus on typical errors, such as boundary values (off by one) or the maximum number of elements in a list; see the sketch below.
   - Black box: function/specification-based; test cases are derived from the specification.
   - White box: structure/program-based; testing considers the internal logical structure of the software.
   - Stress-based: no load, impulse, uniform, linear growth, exponential growth by powers of 2.
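The error-based bullet deserves a small example. Below is an illustrative boundary-value test set for a hypothetical capped list; `add_item` and `MAX_ITEMS` are invented names, and the off-by-one boundaries are exactly where such faults tend to cluster.

```python
# Illustrative error-based (boundary-value) tests for a hypothetical
# capped-list insert.

MAX_ITEMS = 10

def add_item(items, x):
    if len(items) >= MAX_ITEMS:
        raise OverflowError("list full")
    items.append(x)
    return items

# Boundary-value cases: empty, one below the cap, exactly at the cap.
add_item([], 1)                           # minimum: succeeds
add_item(list(range(MAX_ITEMS - 1)), 1)   # cap - 1: succeeds
try:
    add_item(list(range(MAX_ITEMS)), 1)   # exactly at the cap: must fail
except OverflowError:
    pass                                  # expected
else:
    raise AssertionError("boundary fault: the cap was not enforced")
```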
13. Testing Vocabulary
   - Error: a human action producing an incorrect result.
   - Fault: a manifestation of an error in the code.
   - Failure: a system anomaly; executing a fault induces a failure.
   - Verification: "The process of evaluating a system or component to determine whether the products of a given development phase satisfy conditions imposed at the start of the phase," e.g., ensuring the software correctly implements a certain function. Have we built the system right?
   - Validation: "The process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements." Have we built the right system?
   - Certification: "The process of assuring that the solution solves the problem."
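To make the error/fault/failure chain concrete, here is a toy sketch (not from the slides): the programmer's error leaves a fault in the code, and only certain inputs execute the fault and induce a failure.

```python
# ERROR: the programmer believed the loop should stop one element early.
# That mistake leaves a FAULT in the code below.

def total(xs):
    t = 0
    for i in range(len(xs) - 1):  # FAULT: drops the last element
        t += xs[i]
    return t

assert total([]) == 0    # the fault is not executed here: no failure
print(total([1, 2, 3]))  # prints 3, not 6: executing the fault induces a FAILURE
```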
14. IEEE 829: IEEE Standard for Software Test Documentation
   - Test case specification
   - Test suite
   - Test scripts
   - Test scenarios
   - Test plans
   - Test logs
   - Test incident report
   - Test item transmittal report
   - Test summary report
15. Test Process
   (Figure, after van Vliet: a program or document plus a test strategy yields a subset of the inputs. The subset is executed by the program under test, producing the actual output, and is also fed to a prototype or model acting as the oracle for the expected output; the two outputs are compared to give the test results.)
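A minimal sketch of the figure's compare loop, with invented names: the same input subset is run through the program under test and through a deliberately simple model acting as the oracle, and the two outputs are compared.

```python
def fast_sort(xs):
    """Program under test."""
    return sorted(xs)

def oracle_sort(xs):
    """Prototype/model oracle: slow but obviously correct selection sort."""
    out = list(xs)
    for i in range(len(out)):
        for j in range(i + 1, len(out)):
            if out[j] < out[i]:
                out[i], out[j] = out[j], out[i]
    return out

# Compare actual output against expected output for each input subset.
for subset in ([3, 1, 2], [], [5, 5, 1]):
    assert fast_sort(subset) == oracle_sort(subset)  # test results
```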
16. Fault Detection vs. Confidence Building
   - Testing provokes failure behavior: a good strategy for fault detection, but one that does not inspire confidence.
   - The user wants failure-free behavior: high reliability.
   - Automatic recovery minimizes user doubts.
   - Test team results can demoralize end users, so report only those impacting them.
   - A project with no problems is in deep trouble.
17. Cleanroom
   - The developer does not execute the code; correctness is established through static analysis.
   - Modules are integrated and tested by independent testers using traffic-based input profiles.
   - Goal: achieve a given reliability level considering expected use.
18. Testing Requirements
   - Review or inspect to check that all aspects of the system have been described.
     - Scenarios with prospective users, resulting in functional tests
   - Common errors in a specification:
     - Missing information
     - Wrong information
     - Extra information
19. Boehm's Specification Criteria
   - Completeness: all components are present and described completely; nothing is pending.
   - Consistency: components do not conflict with each other, and the specification does not conflict with external specifications (internal and external consistency). Each component must be traceable.
   - Feasibility: benefits must outweigh costs; risk analysis (e.g., safety in robotics).
   - Testability: the system does what is described.
   - These are the roots of ICED-T (Intuitive, Consistent, Efficient, Durable, Thoughtful).
20. Traceability Tables
   - Features: requirements relate to observable system/product features.
   - Source: the source of each requirement.
   - Dependency: the relation of requirements to each other.
   - Subsystem: requirements by subsystem.
   - Interface: requirements related to internal and external interfaces.
21. Traceability Table (Pressman)

          S01   S02   S03…
   R01     X
   R02     X           X
   R03…          X

   (Requirements mapped to the subsystems they depend on; e.g., R02 depends on S01 and S03.)
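The same table is easy to keep as data. A hedged sketch follows: the requirement and subsystem names are the placeholders from the slide, while the consistency checks are my own illustration of why such tables are worth maintaining.

```python
# Traceability table as a mapping from requirements to subsystems.
trace = {
    "R01": {"S01"},
    "R02": {"S01", "S03"},  # R02 depends on subsystems S01 and S03
    "R03": {"S02"},
}
subsystems = {"S01", "S02", "S03"}

# Every requirement must trace to at least one subsystem, and every
# subsystem should be reachable from some requirement.
untraced = [r for r, subs in trace.items() if not subs]
orphans = subsystems - set().union(*trace.values())
assert not untraced and not orphans
```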
22. Maintenance Testing
   - More than 50% of the project life is spent in maintenance.
   - Modifications induce another round of tests.
   - Regression tests (see the sketch below):
     - A library of previous tests, plus additions (especially when a fix was for a fault not uncovered by previous tests)
     - The issue is whether to retest everything vs. selective retest: an expense-related decision (and a state-of-the-architecture/design decision; when entropy sets in, test thoroughly!)
     - Selective retest can cut the testing interval in half.
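A minimal sketch of selective retest, assuming each regression test is tagged with the modules it exercises; the tagging scheme and names are illustrative, not the lecture's.

```python
# Regression library where each test declares the modules it touches.
regression_suite = [
    {"name": "t_login",   "modules": {"auth"}},
    {"name": "t_invoice", "modules": {"billing", "db"}},
    {"name": "t_report",  "modules": {"report", "db"}},
]

def select(suite, changed, retest_all=False):
    """Retest all, or only tests touching the changed modules."""
    if retest_all:
        return suite
    return [t for t in suite if t["modules"] & changed]

for t in select(regression_suite, changed={"db"}):
    print("rerun:", t["name"])  # t_invoice, t_report
```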
23. V&V Planning and Documentation
   - IEEE 1012 specifies what should be in a Test Plan.
   - The Test Design Document specifies, for each software feature, the details of the test approach and lists the associated tests.
   - The Test Case Document lists inputs, expected outputs, and execution conditions.
   - The Test Procedure Document lists the sequence of actions in the testing process.
   - The Test Report states what happened for each test case. Sometimes these are required as part of the contract for system delivery.
   - In small projects many of these can be combined.
24. IEEE 1012
   - Purpose
   - Referenced documents
   - Definitions
   - V&V overview
     - Organization
     - Master schedule
     - Resources summary
     - Responsibilities
     - Tools, techniques, and methodologies
   - Life cycle V&V
     - Management of V&V
     - Requirements phase V&V
     - Design phase V&V
     - Implementation V&V
     - Test phase V&V
     - Installation and checkout phase V&V
     - O&M V&V
   - Software V&V reporting
   - V&V administrative procedures
     - Anomaly reporting and resolution
     - Task iteration policy
     - Deviation policy
     - Control procedures
     - Standard practices and conventions
25. Human Static Testing
   - Reading: peer reviews (the best and worst technique)
   - Walkthroughs and inspections
   - Scenario-based evaluation (SAAM)
   - Correctness proofs
   - Stepwise abstraction from code to spec
26. Inspections
   - Sometimes referred to as Fagan inspections.
   - A team of about four people examines the code, statement by statement:
     - The code is read before the meeting.
     - The meeting is run by a moderator.
     - Two inspectors or readers paraphrase the code.
     - The author is a silent observer.
     - The code is analyzed using a checklist of fault classes: wrongful use of data, declarations, computation, relational expressions, control flow, interfaces.
   - Results in identified problems that the author corrects and the moderator reinspects.
   - A constructive attitude is essential; do not use inspections for programmers' performance reviews.
27. Walkthroughs
   - A guided reading of the code, using test data to run a "simulation."
   - Generally less formal than inspections.
   - A learning situation for new developers.
   - Parnas advocates a review with specialized roles, where the roles define the questions asked; proven to be very effective ("active reviews").
   - Non-directive listening.
28. The Value of Inspections/Walkthroughs (Humphrey 1989)
   - Inspections can be 20 times more efficient than testing.
   - Code reading detects twice as many defects per hour as testing.
   - 80% of development errors were found by inspections.
   - Inspections resulted in a 10x reduction in the cost of finding errors.
   - Beware: bureaucratic code reviews drive away gurus.
29. SAAM
   - Software Architecture Analysis Method.
   - Uses scenarios that describe both current and future behavior.
   - Classify the scenarios by whether the current architecture supports them directly (full support) or indirectly.
   - Develop a list of changes to the architecture/high-level design; if semantically different scenarios require a change in the same component, this may indicate flaws in the architecture.
     - Cohesion: the glue that keeps a module together (low = bad).
       - Functional cohesion: all components contribute to the single function of the module.
       - Data cohesion: encapsulating abstract data types.
     - Coupling: the strength of inter-module connections; loosely coupled modules are easier to comprehend and adapt (low = good).
30. Coverage-Based Techniques (unit testing)
   - Adequacy of testing is based on coverage: percent of statements executed, percent of functional requirements tested.
   - All-paths coverage is exhaustive testing of the code.
   - Control flow coverage (see the sketch below):
     - All-nodes coverage (all-statements coverage); recall cyclomatic complexity graphs.
     - All-edges coverage (branch coverage): all branches chosen at least once.
     - Multiple-condition coverage (extended branch coverage): covers all combinations of elementary predicates.
     - The cyclomatic number criterion tests all linearly independent paths.
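A hand-instrumented illustration of all-edges (branch) coverage; real coverage tools automate the bookkeeping, but the sketch shows what the criterion demands: every outcome of every predicate taken at least once.

```python
taken = set()  # records which decision outcomes (edges) were exercised

def classify(x):
    if x < 0:             # predicate 1
        taken.add("p1_true")
        return "negative"
    taken.add("p1_false")
    if x == 0:            # predicate 2
        taken.add("p2_true")
        return "zero"
    taken.add("p2_false")
    return "positive"

for case in (-5, 0, 7):   # three tests cover all four edges
    classify(case)
assert taken == {"p1_true", "p1_false", "p2_true", "p2_false"}
```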
31. Coverage-Based Techniques (2)
   - Data flow coverage considers definitions and uses of variables:
     - A variable is defined if it is assigned a value in a statement.
     - A definition is live if the variable is not reassigned at an intermediate statement: a definition-clear path.
     - Variable uses: P-use (in a predicate) and C-use (anything else).
     - Testing each possible use of each definition is all-uses coverage.
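A small function annotated with this vocabulary (a sketch; the def/use identifications are in the comments):

```python
def scale(xs, limit):
    total = 0                 # def of total
    for x in xs:              # def of x
        if x > limit:         # P-use of x and limit (in a predicate)
            total += x        # C-use of x; C-use + new def of total
    return total              # C-use of total

# The path from "total = 0" straight to "return total" (empty xs, or
# no x > limit) is definition-clear for total: test it explicitly.
assert scale([], 10) == 0
assert scale([3, 20], 10) == 20
```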
32. Requirements Coverage
   - Transform the requirements into a graph:
     - nodes denoting elementary requirements
     - edges denoting relations between elementary requirements
   - Derive test cases.
   - Use control flow coverage.
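A sketch of the idea with an invented requirements graph; one test case per edge then yields all-edges requirements coverage.

```python
# Requirements graph: nodes are elementary requirements, edges relations.
req_graph = {
    "R01": ["R02", "R03"],  # R01 refines into R02 and R03
    "R02": ["R04"],
    "R03": ["R04"],
    "R04": [],
}

# One test case per edge gives all-edges requirements coverage.
test_cases = [(src, dst) for src, dsts in req_graph.items() for dst in dsts]
print(test_cases)
# [('R01', 'R02'), ('R01', 'R03'), ('R02', 'R04'), ('R03', 'R04')]
```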
33. Fault Seeding to Estimate the Faults in a Program
   - Artificially seed faults, then test to discover both seeded and new faults:
     - total faults ≈ (total faults found - seeded faults found) × (total seeded faults / seeded faults found)
   - Assumes real and seeded faults have the same distribution, but manually generated faults may not be realistic.
   - Alternative: use two groups; real faults found by group X become the "seeded" faults for group Y.
   - Trust the results when most faults found are seeded ones.
   - Finding many real faults is a negative signal: redesign the module.
   - The probability of more faults in a module is proportional to the number of faults already found!
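The estimate from the slide, written out as code (a minimal sketch of the usual Mills-style calculation; the sample numbers are invented):

```python
def estimated_total_faults(total_found, seeded_found, total_seeded):
    """Estimate real faults from the recapture rate of seeded faults."""
    real_found = total_found - seeded_found
    return real_found * (total_seeded / seeded_found)

# Seed 20 faults; testing finds 16 of them plus 24 real faults,
# suggesting roughly 30 real faults in total (24 * 20/16).
print(estimated_total_faults(total_found=40, seeded_found=16,
                             total_seeded=20))  # 30.0
```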
34. Orthogonal Array Testing
   - Intelligent selection of test cases.
   - The fault model being tested is that simple (pairwise) interactions are a major source of defects.
     - Independent variables: factors, and the number of values each can take. With four variables, each of which can take 3 values, exhaustive testing would require 81 tests (3 × 3 × 3 × 3), whereas the OATS technique requires only 9 tests yet covers all pairwise interactions (generated below).
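The 9-test example can be generated directly. This sketch assumes the standard L9(3^4) construction over GF(3); the slide does not prescribe a construction, so this is one common choice.

```python
from itertools import combinations, product

def l9():
    """L9 orthogonal array: 4 factors, 3 levels, built over GF(3)."""
    return [(a, b, (a + b) % 3, (2 * a + b) % 3)
            for a in range(3) for b in range(3)]

tests = l9()
assert len(tests) == 9  # vs. 3**4 == 81 exhaustive tests

# Pairwise coverage: every factor pair takes all 9 level combinations.
for i, j in combinations(range(4), 2):
    assert {(t[i], t[j]) for t in tests} == set(product(range(3), repeat=2))
```

Any two columns of this array contain every pair of levels exactly once, which is what makes 9 tests sufficient for pairwise coverage.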
35. Top-Down and Bottom-Up Testing (Humphrey, 1989)

   Top-down
   - Major features: The control program is tested first. Modules are integrated one at a time. Major emphasis is on interface testing.
   - Advantages: No test drivers are needed. The control program plus a few modules forms a basic early prototype. Interface errors are discovered early. Modular features aid debugging.
   - Disadvantages: Test stubs are needed. The extended early phases dictate a slow staff buildup. Errors in critical modules at low levels are found late.

   Bottom-up
   - Major features: Allows early testing. Modules can be integrated in various clusters as desired. Major emphasis is on module functionality and performance.
   - Advantages: No test stubs are needed. It is easier to adjust staffing needs. Errors in critical modules are found early.
   - Disadvantages: Test drivers and harnesses are needed. Many modules must be integrated before a working program is available. Interface errors are discovered late.
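A toy sketch of the table's two key artifacts, with invented module names: top-down integration needs stubs for unwritten lower modules, while bottom-up needs drivers to exercise finished lower modules before any control program exists.

```python
# Top-down: test the high-level module now, with a STUB standing in
# for the unwritten low-level tax module.
def tax_rate_stub(region):
    return 0.08  # canned answer, enough to test the caller

def invoice_total(amount, region, tax_rate=tax_rate_stub):
    return amount * (1 + tax_rate(region))

assert abs(invoice_total(100.0, "NJ") - 108.0) < 1e-9

# Bottom-up: test the real low-level module first, via a DRIVER.
def real_tax_rate(region):
    return {"NJ": 0.066, "PA": 0.06}.get(region, 0.0)

def tax_rate_driver():
    """Test driver exercising the leaf module in isolation."""
    assert real_tax_rate("PA") == 0.06
    assert real_tax_rate("XX") == 0.0

tax_rate_driver()
```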
36. Some Specialized Tests
   - Testing GUIs
   - Testing with client/server architectures
   - Testing documentation and help facilities
   - Testing real-time systems
   - Acceptance tests
   - Conformance tests
37. Software Testing Footprint
   (Figure: tests completed vs. time, comparing the planned curve with tests run successfully; when the successful-test curve falls well below plan, poor module quality forces a rejection point.)
38. Test Status
   (Chart slide; no text content.)
39. Customer Interests
   - Before installation: features, price, schedule
   - After installation: reliability, response time, throughput
40. Why Bad Things Happen to Good Systems
   - The customer buys off-the-shelf; the system works with 40-60% flow-through; the developer complies with enhancement requests.
   - BUT the customer refuses the critical billing module, demands 33 enhancements, and tinkers with the database.
   - Result: unintended system consequences.
41. Mindset
   - Move from a culture of minimal change to one of maximal change.
   - Move to a "make it work, make it work right, make it work better" philosophy through prototyping and delayed code optimization.
   - Give the test teams the "right of refusal" for any code that was not reasonably tested by the developers.
42. Productivity
   Productivity = F{people, system nature, customer relations, capital investment}
43. Software Testing Summary
   - The software testing body of knowledge is very advanced (in terms of standards, literature, etc.).
   - Software testing is very expensive; statistical risk analysis must be utilized.
     - Cost of field faults vs. schedule slips
     - Release-readiness criteria and procedures are required
   - Testing techniques vary according to the operational environment and application functionality.
     - There are no magic methods.
   - There is organizational conflict of interest between development and test, and between project management and test.
   - Involve testers throughout the project.
   - The hardest PM decision is ship/don't ship due to quality.