Software Testing Fundamentals part II: Presentation Transcript

• T-76.613 Software Testing and Quality Assurance, Lecture 2, 20.9.2005
  Software Testing Fundamentals part II
  Juha Itkonen, SoberIT, Helsinki University of Technology
• Contents
  - Testing terminology and basic concepts
  - Realities and principles of testing
• Testing terminology and basic concepts
• Validation and verification
  - Validation: are we building the right product? The implementation meets customer requirements, needs, and expectations.
  - Verification: are we building the product right? The program conforms to its specification.
• Functional testing
  - Testing conducted to evaluate the compliance of a system with specified functional requirements. (IEEE Standard Glossary of Software Engineering Terminology)
• Black-box and white-box testing
  - Black-box (functional, behavioral, data-driven): the software under test is treated as a black box, and no knowledge of its internal structure or of how it actually works is used in testing. Testing is based on inputs and the respective outputs. The size of the black box can vary from one class or component to a whole system.
  - White-box (structural, logic-driven): testing based on knowing the inner structure of the system and the program logic. The sketch below contrasts the two.
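To make the distinction concrete, here is a minimal sketch in Python; the function and the tests are invented for illustration, not taken from the lecture. The black-box tests are derived purely from the specification (inputs and expected outputs), while the white-box test targets a branch we only know about by reading the code.

    def classify_triangle(a, b, c):
        """Classify a triangle by its side lengths."""
        if a <= 0 or b <= 0 or c <= 0:
            return "invalid"
        if a + b <= c or a + c <= b or b + c <= a:
            return "invalid"  # violates the triangle inequality
        if a == b == c:
            return "equilateral"
        if a == b or b == c or a == c:
            return "isosceles"
        return "scalene"

    # Black-box tests: chosen from the specification, with no
    # reference to the implementation.
    assert classify_triangle(3, 3, 3) == "equilateral"
    assert classify_triangle(3, 4, 5) == "scalene"
    assert classify_triangle(0, 4, 5) == "invalid"

    # White-box test: chosen to exercise a specific branch (the
    # triangle-inequality check), visible only in the source code.
    assert classify_triangle(1, 2, 3) == "invalid"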
• Dynamic and static methods
  - Dynamic methods execute the code: software testing in the traditional sense, and dynamic analysis methods.
  - Static methods do not execute the code: reviews, inspections, static analysis. Some authors call these static testing.
• Defect prevention
  - Quality assurance is focused on all activities that contribute to the quality of the final product; defect prevention is essential.
  - Preventing defects from occurring in the first place: development practices, conventions, techniques, and tools.
  - Many practices are used to prevent bad-quality deliverables: coding conventions and standards, architectural patterns, pair programming, etc.
  - Reviews and inspections detect defects in the current phase and prevent defects in the following phases.
• Defect detection
  - Defects detected earlier are cheaper to fix; prevented defects can be cheaper still.
  - Defect detection is always needed: nobody is perfect, and not all defects can be prevented. Detecting some bearable amount of defects can also be cheaper than preventing them.
  - Detection for confidence: testing and measuring give information about the achieved quality. We try to find defects in order to become convinced that there are none.
• Scripted and non-scripted testing
  - In scripted testing, test cases are pre-documented in detailed, step-by-step descriptions. Different levels of scripting are possible, and scripts can be manual or automated (see the sketch below).
  - Non-scripted testing is usually manual testing without detailed test case descriptions. It can be disciplined, planned, and well-documented exploratory testing... or ad-hoc testing.
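As a small illustration of an automated script, here is a hypothetical scripted test case in Python; the shopping-cart class exists only to make the example self-contained. Every step and its expected result is fixed in advance, which is the defining property of scripted testing.

    class Cart:
        """Minimal shopping cart, included only to make the script runnable."""
        def __init__(self):
            self.items = []
        def add(self, name, price):
            self.items.append((name, price))
        def total(self):
            return sum(price for _, price in self.items)

    def test_add_item_updates_total():
        cart = Cart()                  # Step 1: start with an empty cart
        cart.add("book", price=10.0)   # Step 2: add one item
        assert cart.total() == 10.0    # Expected result: total equals the price
        cart.add("pen", price=2.5)     # Step 3: add a second item
        assert cart.total() == 12.5    # Expected result: totals accumulate

    test_add_item_updates_total()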
• V-model of testing
  [Diagram: the V-model. The left leg of the V is the build side and the right leg the test side. Each development phase is paired with a test level: requirements with acceptance testing, functional specification with system testing, architecture design with integration testing, and module design with unit testing; coding sits at the bottom of the V.]
• Implications of the V-model
  - If you want to get something done, it helps to know first what you want.
  - If you want to build something complex, it helps to have a blueprint laid out in advance.
  - It is better to finish a given task before starting work on another task that depends on the output of the first.
  - It is easier to think about a product in terms of a hierarchy of blueprints.
  - It is better to test each blueprint, and to use each blueprint as the basis for a test process, as the end product is being built.
  - It is much easier to find faults in small units than in large entities; testing of large units is easier when the smaller parts are already tested.
• Benefits of the V-model
  - Intuitive and easy to explain, even to people who have never heard of a software development lifecycle; matches the familiar waterfall.
  - Quite adaptable to various situations, if the team is flexible and understands the inherent limitations of the model.
  - Makes a good model for training people: simple and easy to understand, and shows how testing is related to the other phases of the development project.
  - Beats the code-and-fix approach on any larger project (more than a dozen or so people).
• Levels, types, and phases are not the same thing
  - Test level: a group of test activities that focus on a certain level of the test target, e.g. unit test, integration test, system test, acceptance test. Test levels can be seen as levels of detail and abstraction.
  - Test type: a group of test activities aimed at evaluating a system for a number of associated quality characteristics, e.g. functional test, performance test, stress test, usability test.
  - Test phase: test phases describe temporal parts of the testing process that follow each other sequentially, with or without overlapping.
• Common testing types
  - Functional testing: function testing, concurrency testing, installation testing, platform testing, smoke testing, configuration testing, compatibility testing, exception testing, interface testing, localization testing, data quality testing, production testing.
  - Non-functional testing: performance testing, load testing, stress testing, volume testing, reliability testing, security testing, usability testing, recoverability testing, maintainability testing, documentation testing, conversion testing, standards testing.
• What is a "bug"?
  - Error: a human action that produces an incorrect result.
  - Fault: a manifestation of an error in software, also known as a defect, bug, issue, problem, anomaly, incident, variance, inconsistency, feature, ... If executed, a fault may cause a failure.
  - Failure: deviation of the software from its expected delivery or service.
  - A failure is an event; a fault is a state of the software, caused by an error.
• Error -> Fault -> Failure
  - A programmer makes an error...
  - ...that creates a fault in the software...
  - ...that can cause a failure in operation. The sketch below traces this chain.
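A minimal sketch of the chain in Python; the function is invented for illustration. The programmer's error is typing > instead of >=; the resulting fault sits latently in the code; a failure occurs only when an input actually exercises the faulty branch.

    def is_adult(age):
        """Intended specification: return True for age 18 or older."""
        return age > 18  # FAULT: the error was typing > instead of >=

    print(is_adult(30))  # True  -- fault lies dormant, no failure observed
    print(is_adult(17))  # False -- still no failure, output matches the spec
    print(is_adult(18))  # False -- FAILURE: the spec says this should be True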
• What is a fault?
  - A software fault is a mismatch between the program and its specification. A fault can be detected if and only if the specification exists and is correct.
  - "A software error is present when the program does not do what its end user reasonably expects it to do." (Myers, 1979)
  - "There can never be an absolute definition for bugs, nor an absolute determination of their existence. The extent to which a program has bugs is measured by the extent to which it fails to be useful. This is a fundamentally human measure." (Beizer, 1984)
• A more accurate definition of a failure (one definition, from Ron Patton, Software Testing):
  - The software does not do something that the specification says it should do.
  - The software does something that the specification says it should not do.
  - The software does something that the specification does not mention.
  - The software does not do something that the specification does not mention, but it should.
  - The software is difficult to understand, hard to use, slow, or will be viewed by the end user as just plain not right.
• Test oracles: how do we know it's broken?
  - An oracle is the principle or mechanism by which you recognize a problem.
  - A test oracle provides the expected result for a test: for example, a specification document, a formula, a computer program, or a person.
  - In many cases it is very hard to find an oracle; even the customer and the end user might not be able to tell what the correct behaviour is.
  - The oracle problem is one of the fundamental issues in test automation: how do we teach an automated test to recognize a defect or failure when it happens? This is not a trivial problem. (See the sketch below.)
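One common way to obtain an oracle, sketched here in Python with invented names, is to use a trusted reference implementation: the "computer program" kind of oracle mentioned above. Here math.hypot supplies the expected result for a hypothetical optimized implementation under test.

    import math
    import random

    def fast_hypot(x, y):
        # Hypothetical optimized implementation under test.
        return (x * x + y * y) ** 0.5

    # math.hypot acts as the oracle: a trusted program that provides
    # the expected result for any generated input.
    for _ in range(1000):
        x, y = random.uniform(-1e6, 1e6), random.uniform(-1e6, 1e6)
        assert math.isclose(fast_hypot(x, y), math.hypot(x, y), rel_tol=1e-9)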
• Oracle: is this correct?
  [Slide shows the text "Font size test" rendered in Arial and in Times at 22, 20, 18, 16, and 14 pt, inviting the audience to judge which renderings are correct.]
• When is an issue a fault? Consider the following consistency heuristics:
  - Consistent with history: present function behavior is consistent with past behavior.
  - Consistent with our image: function behavior is consistent with an image that the organization wants to project.
  - Consistent with comparable products: function behavior is consistent with that of similar functions in comparable products.
  - Consistent with claims: function behavior is consistent with what people say it's supposed to be.
  - Consistent with user's expectations: function behavior is consistent with what we think users want.
  - Consistent within product: function behavior is consistent with the behavior of comparable functions or functional patterns within the product.
  - Consistent with purpose: function behavior is consistent with its apparent purpose.
  Source: Black Box Software Testing, Cem Kaner
• Severity and priority
  - Severity of a software fault refers to the severity of the consequences of a failure caused by that fault. A tester or customer is probably the best person to set the severity.
  - Priority is the fixing order of the found faults and the result of a separate prioritisation activity. A tester is probably not the best person to set the priority.
  - Prioritisation is typically a managerial decision: the priority of a fault depends on other issues, such as business priorities, customer priorities, other faults and features, quality policy, and release strategy.
  - Prioritisation is usually based on severity.
• Reliability
  - Reliability: the probability that software will not cause the failure of the system for a specified time under specified conditions.
  - Can a system be fault-free? Can a software system be reliable but still have faults?
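The slide leaves reliability as a definition. As an aside not taken from the lecture, a commonly used model in reliability engineering is the exponential reliability function R(t) = exp(-lambda * t), where lambda is the failure rate; a quick sketch in Python:

    import math

    def reliability(t_hours, failure_rate_per_hour):
        """Exponential model: probability of surviving t hours without failure."""
        return math.exp(-failure_rate_per_hour * t_hours)

    # With one failure expected per 1000 hours of operation, the probability
    # of running 100 hours without a failure is about 0.905.
    print(reliability(100, 1 / 1000))  # ~0.9048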
• Re-testing vs. regression testing
  - Re-tests: a new version of the software with a "fixed fault"; re-run the same test (i.e. re-test). Must be exactly repeatable: same environment and versions (except for the software, which has been intentionally changed!), same inputs and preconditions. If the test now passes, the fault has been fixed correctly. Or has it?
  - Regression tests: running a set of tests that have been run before, after software or environment changes, for confidence that everything still works; also for emergency fixes (possibly a subset). A regression test suite/pack is an asset: a standard set of tests that needs designing and maintaining, and is well worth automating (see the sketch below).
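A hypothetical illustration of an automated regression suite in Python; the slugify function and its history are invented. The suite is a fixed asset, re-run unchanged after every change to the software.

    import unittest

    def slugify(title):
        """Function under regression test (hypothetical example)."""
        return "-".join(title.lower().split())

    class RegressionSuite(unittest.TestCase):
        """Standard set of tests, re-run after every software change."""

        def test_basic_title(self):
            self.assertEqual(slugify("Hello World"), "hello-world")

        def test_extra_whitespace(self):
            # Added when a past fault around repeated spaces was fixed;
            # kept in the suite to catch that fault re-appearing.
            self.assertEqual(slugify("Hello   World"), "hello-world")

    if __name__ == "__main__":
        unittest.main()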
• Regression testing
  - Regression tests look for unexpected side effects (but may not find all of them); a fault fix may introduce or uncover new faults.
  [Diagram: a test finds a fault and a re-test checks the fix, while breadth tests and depth tests around the fix look for side effects elsewhere in the software.]
• Assessing quality based on testing
  [Chart: test quality (low to high) plotted against software quality (low to high), with the annotations "You think you are here", "Students on the Software Project course claim to be here", and "You may be here".]
• Realities and principles of testing
• The problem of complete testing: why not just test everything?
  - Complete testing would require testing every possible valid input of the system, and every possible invalid input, and every possible valid and invalid output, and every possible combination of inputs, in every possible sequence of inputs and outputs, and user interface errors, and configuration and compatibility issues, and environment-related failures, ...
  - There are far too many cases to test any realistic software module completely, not to mention a whole system.
• Why not just "test everything"?
  - Suppose a system has 20 screens, with on average 4 menus, 3 options per menu, and 10 fields per screen; each field accepts 2 types of input (e.g. a date as "Jan 3" or "3/1", a number as integer or decimal) with around 100 possible values.
  - 20 x 4 x 3 x 10 x 2 x 100 = 480,000 tests.
  - At 1 second per test, testing takes 8,000 minutes = 133 hours = 17 working days; at 10 seconds per test, 34 weeks; at 1 minute per test, 4 years; at 10 minutes per test, 40 years (not counting finger trouble, faults, or retests).
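The slide's arithmetic can be checked directly; a small script of my own reproducing the numbers:

    # Reproduce the slide's combinatorial estimate.
    tests = 20 * 4 * 3 * 10 * 2 * 100  # screens x menus x options x fields x input types x values
    print(tests)  # 480000

    for secs_per_test in (1, 10, 60, 600):
        hours = tests * secs_per_test / 3600
        print(f"{secs_per_test:>4} s/test: {hours:,.0f} h = {hours / 8:,.0f} working days")
    #    1 s/test:    133 h =     17 working days
    #   10 s/test:  1,333 h =    167 working days (about 34 weeks)
    #   60 s/test:  8,000 h =  1,000 working days (about 4 years)
    #  600 s/test: 80,000 h = 10,000 working days (about 40 years)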
• Execution paths
  [Diagram: a flowchart from Begin to End through nodes A-J, with branches and a loop executed fewer than 20 times.]
  - There are 5^1 + 5^2 + ... + 5^19 + 5^20 ≈ 10^14 = 100 trillion paths through the program.
  - Testing would take approximately one billion years to try every path, if one could write, execute, and verify a test case every five minutes; about 3.2 million years at one test case per second.
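The path count and the timescales are easy to verify; a quick check of my own, not from the slides:

    paths = sum(5 ** i for i in range(1, 21))  # 5^1 + 5^2 + ... + 5^20
    print(f"{paths:.2e}")  # ~1.19e+14, i.e. about 10^14 paths

    seconds_per_year = 3600 * 24 * 365
    print(paths * 300 / seconds_per_year)  # ~1.1e9 years at 5 minutes per test
    print(paths / seconds_per_year)        # ~3.8e6 years at 1 second per test
                                           # (the slide's 3.2 million comes from
                                           # rounding the count to exactly 10^14)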
• Testing is always prioritizing
  - Time is always limited. Use the company's significant risks to focus the testing effort: what to test first, what to test most, how thoroughly to test each feature, and what not to test (at least for now). Most important tests first.
  - Possible ranking criteria:
    1. Test where a failure would be most severe.
    2. Test where failures would be most visible.
    3. Test where failures are most likely.
    4. Ask the customer to prioritize the requirements.
    5. Test what is most critical to the customer's business.
    6. Test the areas changed most often.
    7. Test the areas with the most problems in the past.
    8. Test the most complex or technically critical areas.
• Realities of testing (1/2)
  - It is impossible to test a program completely. Testing can show the presence of defects but cannot show their absence. You can report the defects you found, and what you have tested and how.
  - Not all bugs can be found, and not all found bugs will be fixed: "it's really not a bug", "it's too risky to fix", "there are more important tasks to do", "it's just not worth it", "there is not enough time".
  - Testing does not create quality software or remove defects.
• Realities of testing (2/2)
  - The critique is focused on the product, not the developer.
  - Product specifications are never final: you cannot wait until specifications are final, there will always be changes to them, and they are never complete.
  - The more bugs you find, the more bugs there are: programmers have bad days, programmers often make the same mistake repeatedly, and some bugs are really just the tip of the iceberg.
  - The pesticide paradox: programs become immune to unchanging tests.
  - Software testing should be based on risks.
  - Testers are not the most popular members of a project team.
• Testing principles (Burnstein 2003)
  - When the test objective is to detect defects, a good test case is one that has a high probability of revealing a yet-undetected defect.
  - Test results should be inspected meticulously.
  - A test case must contain the expected results.
  - Test cases should be developed for both valid and invalid input conditions. (See the sketch below.)
  - The probability of additional defects existing in a software component is proportional to the number of defects already detected in that component.
  - Testing should be carried out by a group that is independent of the development team.
  - Tests must be repeatable and reusable.
  - Testing should be planned.
  - Testing activities should be integrated into the software life cycle.
  - Testing is a creative and challenging task.
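To make the third and fourth principles concrete, a hypothetical example of my own, not Burnstein's: each case pairs an input with its expected result, and both valid and invalid conditions are covered.

    def parse_percentage(text):
        """Parse a string like '75%' into an int 0..100 (invented example)."""
        value = int(text.rstrip("%"))
        if not 0 <= value <= 100:
            raise ValueError(f"out of range: {value}")
        return value

    # Each test case states its input AND its expected result,
    # for valid and invalid conditions alike.
    valid_cases = [("0%", 0), ("75%", 75), ("100%", 100)]
    for text, expected in valid_cases:
        assert parse_percentage(text) == expected

    invalid_cases = ["101%", "-5%", "abc"]
    for text in invalid_cases:
        try:
            parse_percentage(text)
            raise AssertionError(f"{text!r} should have been rejected")
        except ValueError:
            pass  # expected result: a ValueError is raised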