Software testing basics and its types

For informative discussions on software testing, please see http://forum.360logica.com/

360logica is an independent software and application testing services company that provides a wide range of testing solutions based on domain, technology, and business needs, including software product testing, software quality assurance, test automation, finance testing, and mobile software and application testing. 360logica offers a full range of software testing services, including Software Product Testing, Test Automation, Performance Test Engineering, Finance Application Testing, Healthcare App Testing, and SaaS Product Testing. We work closely with our partners to tailor a program of support that meets their needs and ensures our systems achieve the quality levels our partners demand, especially in financial testing.

Published in: Technology, Education


  1. Executing software in a simulated or real environment, using inputs selected somehow. 360 Logica
  2. • Detect faults
     • Establish confidence in software
     • Evaluate properties of software:
       • Reliability
       • Performance
       • Memory Usage
       • Security
       • Usability
  3. Most of the software testing literature equates test case selection with software testing, but that is just one difficult part. Other difficult issues include:
     • Determining whether or not outputs are correct.
     • Comparing resulting internal states to expected states.
     • Determining whether adequate testing has been done.
     • Determining what you can say about the software when testing is completed.
     • Measuring performance characteristics.
     • Comparing testing strategies.
  4. We frequently accept outputs because they are plausible rather than correct. It is difficult to determine whether outputs are correct because:
     • We wrote the software to compute the answer.
     • There is so much output that it is impossible to validate it all.
     • There is no (visible) output.
  5. • Stages of Development
     • Source of Information for Test Case Selection
  6. Testing in the Small
     • Unit Testing
     • Feature Testing
     • Integration Testing
  7. Tests the smallest individually executable code units. Usually done by programmers. Test cases might be selected based on code, specification, intuition, etc.
     Tools:
     • Test driver/harness
     • Code coverage analyzer
     • Automatic test case generator
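As a sketch of the test driver/harness idea, a minimal Python example; the unit under test (`absolute`) and its test cases are hypothetical:

```python
import unittest

def absolute(x):
    # Hypothetical unit under test: the smallest individually executable unit
    return x if x >= 0 else -x

class AbsoluteTest(unittest.TestCase):
    # Test cases selected from the specification plus boundary intuition
    def test_positive(self):
        self.assertEqual(absolute(5), 5)

    def test_negative(self):
        self.assertEqual(absolute(-5), 5)

    def test_zero_boundary(self):
        self.assertEqual(absolute(0), 0)

# Test driver: load and run the unit's test cases programmatically
suite = unittest.defaultTestLoader.loadTestsFromTestCase(AbsoluteTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

In practice a code coverage analyzer would be run over the same suite to judge how thoroughly the unit was exercised.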
  8. Tests interactions between two or more units or components. Usually done by programmers. Emphasizes interfaces.
     Issues:
     • In what order are units combined?
     • How do you assure the compatibility and correctness of externally-supplied components?
  9. How are units integrated? What are the implications of this order?
     • Top-down => need stubs; top level tested repeatedly.
     • Bottom-up => need drivers; bottom levels tested repeatedly.
     • Critical units first => stubs & drivers needed; critical units tested repeatedly.
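A minimal sketch of the top-down case: the upper-level unit is integrated first, so the not-yet-integrated lower-level component is replaced by a stub. All names here (`convert`, `rate_service_stub`) are hypothetical:

```python
def rate_service_stub(currency):
    # Stub standing in for the real, not-yet-integrated exchange-rate service;
    # it returns fixed values so the upper level can be tested now
    return {"USD": 1.0, "EUR": 0.5}[currency]

def convert(amount, currency, rate_source):
    # Upper-level unit under test; the dependency is injected so the stub
    # can replace the real service during integration testing
    return amount * rate_source(currency)

# Driver code exercising the partially integrated system
print(convert(100, "EUR", rate_service_stub))
print(convert(7, "USD", rate_service_stub))
```

In bottom-up integration the roles reverse: the real lower-level component exists, and a throwaway driver like the last two lines exercises it.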
  10. Potential Problems:
      • Inadequate unit testing.
      • Inadequate planning & organization for integration testing.
      • Inadequate documentation and testing of externally-supplied components.
  11. Testing in the Large
      • System Testing
      • End-to-End Testing
      • Operations Readiness Testing
      • Beta Testing
      • Load Testing
      • Stress Testing
      • Performance Testing
      • Reliability Testing
      • Regression Testing
  12. Test the functionality of the entire system. Usually done by professional testers.
  13. • Not all problems will be found, no matter how thorough or systematic the testing.
      • Testing resources (staff, time, tools, labs) are limited.
      • Specifications are frequently unclear/ambiguous and changing (and not necessarily complete and up-to-date).
      • Systems are almost always too large to permit test cases to be selected based on code characteristics.
  14. • Exhaustive testing is not possible.
      • Testing is creative and difficult.
      • A major objective of testing is failure prevention.
      • Testing must be planned.
      • Testing should be done by people who are independent of the developers.
  15. Every systematic test selection strategy can be viewed as a way of dividing the input domain into subdomains and selecting one or more test cases from each. The division can be based on such things as code characteristics (white box), specification details (black box), domain structure, risk analysis, etc. Subdomains are not necessarily disjoint, even though the testing literature frequently refers to them as partitions.
  16. • Can only be used at the unit testing level, and even then it can be prohibitively expensive.
      • We don't know the relationship between a "thoroughly" tested component and faults. One can generally argue that these criteria are necessary conditions but not sufficient ones.
  17. • Unless there is a formal specification (which there rarely/never is), it is very difficult to assure that all parts of the specification have been used to select test cases.
      • Specifications are rarely kept up-to-date as the system is modified.
      • Even if every functionality unit of a specification has been tested, that doesn't assure that there aren't faults.
  18. An operational distribution is a probability distribution that describes how the system is used in the field.
  19. • The input stream for this system is also the input stream for a different, already-operational system.
      • The input stream for this system is the output stream of a different, already-operational system.
      • Although this system is new, it is replacing an existing system which ran on a different platform.
      • Although this system is new, it is replacing an existing system which used a different design paradigm or a different programming language.
      • There has never been a software system to do this task, but there has been a manual process in place.
  20. • A form of domain-based test case selection.
      • Uses historical usage data to select test cases.
      • Assures that the testing reflects how the system will be used in the field, and therefore uncovers the faults that users are likely to see.
  21. • It can be difficult and expensive to collect the necessary data.
      • Not suitable if the usage distribution is uniform (which it never is).
      • Does not take the consequence of failure into consideration.
  22. • Really does provide a user-centric view of the system.
      • Allows you to say concretely what is known about the system's behavior based on testing.
      • Provides a metric that is meaningfully related to the system's dependability.
  23. Look at characteristics of the input domain or subdomains.
      • Consider typical, boundary, & near-boundary cases (these can sometimes be automatically generated).
      • This sort of boundary analysis may be meaningless for non-numeric inputs. What are the boundaries of {Rome, Paris, London, …}?
      • Can also apply similar analysis to output values, producing output-based test cases.
  24. US Income Tax Example:

      If income is      Tax is
      $0–$20K           15% of total income
      $20K–$50K         $3K + 25% of amount over $20K
      Above $50K        $10.5K + 40% of amount over $50K

      Boundary cases for inputs: $0, $20K, $50K
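The bracket rules and boundary cases above can be sketched in Python; the figures come from the slide's illustrative example, not current tax law:

```python
def us_tax(income):
    # Brackets from the slide's illustrative example
    if income <= 20_000:
        return 0.15 * income
    if income <= 50_000:
        return 3_000 + 0.25 * (income - 20_000)
    return 10_500 + 0.40 * (income - 50_000)

# Boundary and near-boundary test cases for the three subdomains
assert us_tax(0) == 0
assert us_tax(20_000) == 3_000    # top of the 15% bracket
assert us_tax(20_001) > 3_000     # just past the first boundary
assert us_tax(50_000) == 10_500   # top of the 25% bracket
```

Each `if` arm corresponds to one subdomain of the input space, which is why the boundary values $0, $20K, and $50K are the interesting test inputs.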
  25. Random testing involves selecting test cases based on a probability distribution. It is NOT the same as ad hoc testing. Typical distributions are:
      • Uniform: test cases are chosen with equal probability from the entire input domain.
      • Operational: test cases are drawn from a distribution defined by carefully collected historical usage data.
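A sketch of the two distributions in Python; the input bounds and the usage weights are assumptions made for illustration:

```python
import random

random.seed(1)  # reproducible illustration

# Uniform: every point of the input domain is equally likely.
# The domain bounds ($0-$200K incomes) are an assumption.
uniform_cases = [random.uniform(0, 200_000) for _ in range(1_000)]

# Operational: inputs drawn according to hypothetical field-usage data,
# e.g. how often each ATM function is actually invoked.
functions = ["withdraw", "read balance", "transfer"]
usage_weights = [0.70, 0.25, 0.05]  # assumed historical frequencies
operational_cases = random.choices(functions, weights=usage_weights, k=1_000)
```

Under the operational profile, "withdraw" dominates the generated suite, mirroring what users would actually do in the field.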
  26. • If the domain is well-structured, automatic generation can be used, allowing many more test cases to be run than if tests are manually generated.
      • If an operational distribution is used, then it should approximate user behavior.
  27. • An oracle (a mechanism for determining whether the output is correct) is required.
      • Need a well-structured domain.
      • Even a uniform distribution may be difficult or impossible to produce for complex or non-numeric domains.
      • If a uniform distribution is used, only a negligible fraction of the domain can be tested in most cases.
      • Without an operational distribution, random testing does not approximate user behavior, and therefore does not provide an accurate picture of the way the system will behave.
  28. Risk is the expected loss attributable to the failures caused by faults remaining in the software. Risk is based on:
      • Failure likelihood (likelihood of occurrence).
      • Failure consequence.
      So risk-based testing involves selecting test cases to minimize risk by making sure that the most likely inputs and the highest-consequence ones are covered.
  29. Example: ATM
      Functions: withdraw cash, transfer money, read balance, make payment, buy train ticket.
      Attributes: security, ease of use, availability.
  30. Features &          Occurrence      Failure          Priority
      Attributes          Likelihood      Consequence      (L x C)
      Withdraw cash       High = 3        High = 3         9
      Transfer money      Medium = 2      Medium = 2       4
      Read balance        Low = 1         Low = 1          1
      Make payment        Low = 1         High = 3         3
      Buy train ticket    High = 3        Low = 1          3
      Security            Medium = 2      High = 3         6
  31. Features &          Occurrence      Failure          Priority
      Attributes          Likelihood      Consequence      (L x C)
      Withdraw cash       High = 3        High = 3         9
      Security            Medium = 2      High = 3         6
      Transfer money      Medium = 2      Medium = 2       4
      Make payment        Low = 1         High = 3         3
      Buy train ticket    High = 3        Low = 1          3
      Read balance        Low = 1         Low = 1          1
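The ranking above can be reproduced with a small sketch; the likelihood and consequence ratings are taken from the ATM example:

```python
# Risk-based ranking: priority = likelihood x consequence
features = {
    "Withdraw cash":    (3, 3),
    "Transfer money":   (2, 2),
    "Read balance":     (1, 1),
    "Make payment":     (1, 3),
    "Buy train ticket": (3, 1),
    "Security":         (2, 3),
}

# Sort features by descending priority to decide test effort allocation
ranked = sorted(features, key=lambda f: features[f][0] * features[f][1],
                reverse=True)
for name in ranked:
    likelihood, consequence = features[name]
    print(name, likelihood * consequence)
```

Testing effort is then allocated from the top of the ranking down, so "Withdraw cash" gets the most attention and "Read balance" the least.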
  32. The end user runs the system in their environment to evaluate whether the system meets their criteria. The outcome determines whether the customer will accept the system. This is often part of a contractual agreement.
  33. Test modified versions of a previously validated system. Usually done by testers. The goal is to assure that changes to the system have not introduced errors (caused the system to regress). The primary issue is how to choose an effective regression test suite from existing, previously-run test cases.
  34. Once a test suite has been selected, it is often desirable to prioritize test cases based on some criterion. That way, since the time available for testing is limited and therefore not all tests can be run, at least the "most important" ones can be.
  35. • Most frequently executed inputs.
      • Most critical functions.
      • Most critical individual inputs.
      • (Additional) statement or branch coverage.
      • (Additional) function coverage.
      • Fault-exposing potential.
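As one illustration of the "(additional) statement coverage" criterion, a greedy prioritization sketch; the test names and covered-statement sets are hypothetical:

```python
# Greedy "additional coverage" prioritization: repeatedly pick the test
# that covers the most statements not yet covered by tests already chosen.
tests = {
    "t1": {1, 2, 3},
    "t2": {3, 4},
    "t3": {5, 6, 7, 8},
    "t4": {1, 8},
}

order, covered = [], set()
while len(order) < len(tests):
    # Choose the remaining test contributing the most new statements
    best = max((t for t in tests if t not in order),
               key=lambda t: len(tests[t] - covered))
    order.append(best)
    covered |= tests[best]

print(order)  # tests in priority order
```

With this data, t3 runs first (4 new statements), then t1, then t2; t4 comes last because everything it covers has already been exercised.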
  36. Methods based on the internal structure of the code:
      • Statement coverage
      • Branch coverage
      • Path coverage
      • Data-flow coverage
  37. White-box methods can be used for:
      • Test case selection or generation.
      • Test case adequacy assessment.
      In practice, the most common use of white-box methods is as adequacy criteria after tests have been generated by some other method.
  38. Statement, branch, and path coverage are examples of control flow criteria. They rely solely on syntactic characteristics of the program (ignoring the semantics of the program computation). The data flow criteria require the execution of path segments that connect parts of the code that are intimately connected by the flow of data.
  39. • Is code coverage an effective means of detecting faults?
      • How much coverage is enough?
      • Is one coverage criterion better than another?
      • Does increasing coverage necessarily lead to higher fault detection?
      • Are coverage criteria more effective than random test case selection?
  40. • Test execution: run large numbers of test cases/suites without human intervention.
      • Test generation: produce test cases by processing the specification, code, or model.
      • Test management: log test cases & results; map tests to requirements & functionality; track test progress & completeness.
  41. • More testing can be accomplished in less time.
      • Testing is repetitive, tedious, and error-prone.
      • Test cases are valuable: once they are created, they can and should be used again, particularly during regression testing.
  42. • Does the payoff from test automation justify the expense and effort of automation?
      • Learning to use an automation tool can be difficult.
      • Tests have a finite lifetime.
      • Completely automated execution implies putting the system into the proper state, supplying the inputs, running the test case, collecting the results, and verifying the results.
  43. • Automated tests are more expensive to create and maintain (estimates range from 3 to 30 times the cost of manual tests).
      • Automated tests can lose relevancy, particularly when the system under test changes.
      • Use of tools requires that testers learn how to use them, cope with their problems, and understand what they can and can't do.
  44. • Load/stress tests: it is very difficult to have very large numbers of human testers simultaneously accessing a system.
      • Regression test suites: tests maintained from previous releases, run to check that changes haven't caused faults.
      • Sanity tests: run after every new system build to check for obvious problems.
      • Stability tests: run the system for 24 hours to see that it can stay up.
  45. NIST estimates that billions of dollars could be saved each year if improvements were made to the testing process.
      *NIST Report: The Economic Impact of Inadequate Infrastructure for Software Testing, 2002.
  46. Sector                          Cost of Inadequate     Potential Cost Reduction from
                                      Software Testing       Feasible Improvements
      Transportation Manufacturing    $1,800,000,000         $589,000,000
      Financial Services              $3,340,000,000         $1,510,000,000
      Total U.S. Economy              $59 billion            $22 billion

      *NIST Report: The Economic Impact of Inadequate Infrastructure for Software Testing, 2002.
