
Unit 1 basic concepts of testing & quality


Unit 1 of the subject System Testing and Quality covers the quality revolution, software quality, fault, error, defect and failure, the role of testing, static analysis, dynamic analysis, the objectives of testing, test cases, and test activities.

Published in: Engineering

  1. BASIC CONCEPTS OF TESTING AND QUALITY (UNIT 1)
     Prepared by: Ravi J Khimani, CSE Department, SLTIET, Rajkot
  2. QUALITY REVOLUTION
     • Quality is a critical issue in product development.
     • The quality revolution began around 1940.
     • It was driven by global competition, outsourcing, off-shoring, and rising customer expectations.
     • Developing a quality product on a tight schedule is a major challenge for companies.
  3. QUALITY REVOLUTION
     • In the old approach, efforts to improve quality were centred at the end of product development.
     • In the new approach, quality improvement encompasses all phases, from requirements analysis to final delivery of the product.
  4. QUALITY REVOLUTION
     • An effective quality process must focus on:
     • Paying attention to customer expectations.
     • Making efforts to continuously improve quality.
     • Integrating measurement with design and development.
     • Developing a system-level perspective while concentrating on methodology and process.
     • Eliminating waste through continuous improvement.
  5. QUALITY REVOLUTION
     • The quality movement started around 1940-1950.
     • W. Edwards Deming contributed the foundational literature on Statistical Quality Control (SQC).
     • SQC is a discipline based on measurements and statistics.
     • In SQC, decisions are made and plans are developed from the collection and evaluation of facts and data.
  6. QUALITY REVOLUTION
     • Deming promoted Walter Shewhart's method for SQC.
     • Its centrepiece is the Plan-Do-Check-Act (PDCA) cycle, also known as the Shewhart cycle.
  7. QUALITY REVOLUTION
     • Between 1950 and 1970, companies came up with an innovative principle known as the "lean principle".
     • It is "a systematic approach to identifying and eliminating waste through continuous improvement, flowing the product at the pull of the customer's expectations."
     • For example, compressing the turnaround time of payments in banking.
  8. QUALITY REVOLUTION
     • In the 1950s, Joseph M. Juran of the US proposed raising the level of quality management from the manufacturing department to the whole organization.
     • That style of quality management is known as Total Quality Control (TQC).
     • It includes company-wide activities, audits, quality circles, and the promotion of quality.
  9. QUALITY REVOLUTION
     • TQC emphasizes:
     • Quality first, not short-term profits.
     • The customer first, not the producer.
     • Decisions based on data and facts.
     • Management that is participatory and respectful of all employees.
     • Management driven by cross-functional committees covering product planning, design, sales, marketing, manufacturing, purchasing, and distribution.
  10. QUALITY REVOLUTION
     • One TQC method developed in Japan is known as the Ishikawa, or cause-and-effect, diagram.
     • It says that product quality arises from four causes: materials, machines, methods, and manpower.
  11. QUALITY REVOLUTION
     • Traditionally, TQC and the lean concept were applied in the manufacturing department.
     • The software development process adopts these concepts through the Capability Maturity Model (CMM) to guide the production of quality software.
     • The CMM provides a framework for discussing software production issues.
  12. SOFTWARE QUALITY
     • What is software quality?
     • Quality is a complex concept: it means different things to different people.
     • Kitchenham and Pfleeger discuss software quality from five different views:
     • Transcendental view:
     • It envisages quality as something that can be recognized but is difficult to define.
     • It also applies to complex areas of everyday life.
     • It is not specific to software quality.
  13. SOFTWARE QUALITY
     • User view:
     • It perceives quality as fitness for purpose.
     • Key question: "Does the product satisfy user needs?"
     • Manufacturing view:
     • Here the quality level is determined by conformance to the product's specification.
     • Product view:
     • Quality is viewed as the product's inherent characteristics, both internal and external.
  14. SOFTWARE QUALITY
     • Value-based view:
     • Quality depends on the amount a customer is willing to pay for the product.
     • In the mid-1970s, McCall, Richards, and Walters described software quality in two terms:
     • Quality factors
     • Quality criteria
  15. SOFTWARE QUALITY
     • A quality factor represents a behavioural characteristic of the system.
     • Examples: correctness, reliability, efficiency, testability, maintainability, modularity, and reusability.
     • A quality criterion is an attribute of a quality factor that can be related to software development.
  16. SOFTWARE QUALITY
     • Various models have been proposed for software quality and its related attributes.
     • The two most popular models are:
     • CMM (Capability Maturity Model)
     • ISO (International Organization for Standardization) quality standards
     • In the field of testing, two popular models are available:
     • TPI (Test Process Improvement)
     • TMM (Test Maturity Model)
  17. FAILURE, FAULT, ERROR AND DEFECT
     • In software testing we find references to these four terms.
     • In the fault-tolerant computing community:
     • Failure: a failure is said to occur whenever the external behaviour of a system does not conform to that prescribed in the system specification.
     • Error: an error is a state of the system. In the absence of any corrective action by the system, an error state can lead to a failure that would not be attributed to any event subsequent to the error.
  18. FAILURE, FAULT, ERROR AND DEFECT
     • Fault: a fault is the adjudged cause of an error.
     • A fault may remain undetected for a long time, until some event activates it.
     • When an event activates a fault, it brings the program into an intermediate error state.
     • If computation is allowed to proceed from the error state without any corrective action, the error can eventually manifest as a failure.
  19. FAILURE, FAULT, ERROR AND DEFECT
     • In fault-tolerant computing, corrective actions can be taken to move a program out of an error state into a desirable state, so that subsequent computation does not eventually lead to a failure.
     • The process of failure manifestation can therefore be succinctly represented as a behaviour chain: fault, then error, then failure.
  20. FAILURE, FAULT, ERROR AND DEFECT
     • In software, a system may be defective due to design issues; certain system states expose a defect, resulting in faults, which are defined as incorrect signal values or decisions within the system.
     • For practical purposes, the terms defect and fault are synonymous.
  21. FAILURE, FAULT, ERROR AND DEFECT
     • For example, consider a small organization. Defects in the organization's staff promotion policy can cause improper promotions, viewed as faults. The resulting ineptitude and dissatisfaction are errors in the organization's state. The organization's personnel or departments then begin to malfunction as a result of these errors, in turn causing an overall degradation of performance. The end result can be the organization's failure to achieve its goals.
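The fault-error-failure chain described above can also be sketched in code. This is a minimal, hypothetical example (the `Account` class and its operations are illustrative, not from the slides): a coding defect (fault) corrupts internal state (error), which later surfaces as incorrect external behaviour (failure).

```python
# Hypothetical sketch of the fault -> error -> failure chain.

class Account:
    def __init__(self):
        self.balance = 0  # internal state

    def deposit(self, amount):
        # FAULT: '-' was written where '+' was intended.
        self.balance -= amount  # activating the fault puts the
                                # object into an ERROR state

    def withdraw(self, amount):
        if self.balance < amount:
            # FAILURE: external behaviour deviates from the spec,
            # which says a deposited amount must be withdrawable.
            raise RuntimeError("insufficient funds")
        self.balance -= amount

acct = Account()
acct.deposit(100)          # fault activated; balance is now -100 (error state)
try:
    acct.withdraw(50)      # the error propagates to an observable failure
    failed = False
except RuntimeError:
    failed = True
print(failed)              # True: the latent fault manifested as a failure
```

Note that the fault was present from the start, but no failure occurred until the error state was exercised by a later operation.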
  22. ROLE OF TESTING
     • Testing plays an important role in achieving and assessing the quality of a software product.
     • We improve the quality of a product as we repeat a test, find defects, and fix cycle during development.
     • We assess how good the system is when we perform system-level tests before releasing the product.
  23. ROLE OF TESTING
     • Friedman and Voas say, "Software testing is a verification process for software quality assessment and improvement."
     • Software quality assessment is divided into two categories:
     • Static analysis
     • Dynamic analysis
  24. 1. STATIC ANALYSIS [ROLE OF TESTING]
     • It is based on the examination of a number of documents:
     • requirements documents,
     • software models,
     • design documents,
     • source code.
     • It also includes code review, inspection, walk-through, algorithm analysis, and proof of correctness.
     • It does not involve actual execution of the code under development.
     • Instead, it examines the code and reasons over all possible behaviours that might arise at run time.
     • For example, compiler optimizations are standard static analyses.
  25. 2. DYNAMIC ANALYSIS [ROLE OF TESTING]
     • It involves actual program execution with input values.
     • During execution, we observe the behavioural and performance properties of the program.
     • For practical reasons, only a finite subset of the input set can be selected.
     • Therefore, in testing we observe some program behaviours and reach a conclusion about the quality of the system.
     • Careful selection of a finite test set is crucial to reaching a reliable conclusion.
  26. ROLE OF TESTING [CONT…]
     • The two analysis techniques are complementary in nature; for better effectiveness, both must be performed repeatedly and alternately.
     • Practitioners and researchers need to remove the boundary between static and dynamic analysis and create a hybrid analysis that combines the strengths of both approaches.
  27. OBJECTIVES OF TESTING
     • The stakeholders in a test process are the programmers, the test engineers, the project managers, and the customers.
     • A stakeholder is a person or organization who influences a system's behaviour or who is impacted by that system.
     • Different stakeholders view a test process from different perspectives, as follows:
  28. OBJECTIVES OF TESTING
     • It works:
     • While implementing a program unit, the programmer may want to test whether or not the unit works in normal circumstances.
     • The programmer gains much confidence if the unit works to his or her satisfaction.
     • The same idea applies to an entire system: once a system has been integrated, the developers may want to test whether or not the system performs its basic functions.
     • Here, for psychological reasons, the objective of testing is to show that the system works, rather than that it does not work.
  29. OBJECTIVES OF TESTING
     • It does not work:
     • Once the programmer (or the development team) is satisfied that a unit (or the system) works to a certain degree, more tests are conducted with the objective of finding faults in the unit (or the system).
     • Here, the idea is to try to make the unit (or the system) fail.
     • Reduce the risk of failure:
     • Software systems contain faults, which cause the system to fail from time to time; this gives rise to the notion of a failure rate.
     • As faults are discovered and fixed during more tests, the failure rate of the system generally decreases.
     • Thus, a higher-level objective of performing tests is to bring down the risk of failure to an acceptable level.
  30. OBJECTIVES OF TESTING
     • Reduce the cost of testing:
     • The different kinds of costs associated with a test process include:
     • the cost of designing, maintaining, and executing test cases,
     • the cost of analyzing the result of executing each test case,
     • the cost of documenting the test cases,
     • and the cost of actually executing the system and documenting it.
     • Therefore, the fewer the test cases designed, the lower the associated cost of testing.
     • However, producing a small number of arbitrary test cases is not a good way of saving cost.
     • The objective is to produce reliable software with a small number of effective test cases; this defines the effectiveness of the test cases.
  31. TEST CASE
     • In its basic form, a test case is a pair <input, expected outcome>.
     • In a stateless system, the outcome depends only on the current input.
     • Examples: a routine that calculates the square root of a number; a compiler for the C language.
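For a stateless unit, the <input, expected outcome> pairs can be written down directly. A minimal sketch using the square-root example from the slide (the function name and the chosen inputs are illustrative):

```python
import math

# The unit under test: a stateless square-root routine.
def square_root(x):
    return math.sqrt(x)

# Each test case is a single <input, expected outcome> pair;
# the outcome depends only on the input.
test_cases = [
    (0.0, 0.0),
    (4.0, 2.0),
    (2.25, 1.5),
]

for inp, expected in test_cases:
    actual = square_root(inp)
    assert abs(actual - expected) < 1e-9, f"failed for input {inp}"
print("all stateless test cases passed")
```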
  32. TEST CASE
     • In a state-oriented system, the outcome depends on the current input as well as the current state of the system.
     • Here a test case consists of a sequence of <input, expected outcome> pairs, ordered according to state.
     • It may also involve decision points and timing factors.
     • For example, an ATM machine.
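For a state-oriented system such as the ATM mentioned on the slide, a test case is an ordered sequence of <input, expected outcome> pairs, because the same input can produce a different outcome in a different state. A toy sketch (the `ATM` class, its methods, and the PIN/amount values are all hypothetical):

```python
# A toy state-oriented system: the same withdraw request succeeds or is
# denied depending on the authentication state left by earlier inputs.

class ATM:
    def __init__(self, pin, balance):
        self._pin = pin
        self._balance = balance
        self._authenticated = False   # part of the current state

    def enter_pin(self, pin):
        self._authenticated = (pin == self._pin)
        return "accepted" if self._authenticated else "rejected"

    def withdraw(self, amount):
        if not self._authenticated:
            return "denied"           # same input, different state, different outcome
        self._balance -= amount
        return f"dispensed {amount}"

atm = ATM(pin="1234", balance=500)

# The test case: an ordered sequence of <input, expected outcome> pairs.
sequence = [
    (("withdraw", 100), "denied"),        # not authenticated yet
    (("enter_pin", "1234"), "accepted"),
    (("withdraw", 100), "dispensed 100"),
]

for (op, arg), expected in sequence:
    actual = getattr(atm, op)(arg)
    assert actual == expected, (op, arg, actual)
print("state-oriented test sequence passed")
```

Reordering the pairs would change the verdict, which is exactly why the sequence, not the individual pairs, constitutes the test case.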
  33. EXPECTED OUTCOME
     • The outcome of program execution is a complex entity, which may include:
     • values produced,
     • state changes (in the program or a database),
     • a set of values that must be interpreted together for the outcome to be valid.
     • ORACLE: an important concept in test design.
     • An oracle is an entity (a program, a process, a human expert, or a body of data) that tells us the expected outcome of a test.
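An oracle does not have to recompute the result the same way the program does. A minimal sketch (function name and tolerance are illustrative): for a square-root routine, the oracle can simply square the actual output and compare it with the input.

```python
# A simple program oracle for a square-root computation: it judges an
# actual outcome without recomputing the square root itself.

def oracle_sqrt(inp, actual, tolerance=1e-9):
    """Return True iff `actual` is an acceptable square root of `inp`."""
    return actual >= 0 and abs(actual * actual - inp) <= tolerance

assert oracle_sqrt(4.0, 2.0)        # correct outcome accepted
assert not oracle_sqrt(4.0, -2.0)   # negative root rejected
assert not oracle_sqrt(4.0, 2.1)    # wrong value rejected
print("oracle checks passed")
```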
  34. EXPECTED OUTCOME
     • Expected outcomes should be computed while designing the test cases, before program execution, from the selected inputs.
     • This eliminates implementation bias.
     • The alternative process (execute the program, observe the actual output, verify that output, and use the verified output as the expected outcome in subsequent tests) introduces such bias.
  35. CONCEPT OF COMPLETE TESTING
     • Complete testing means "there are no undiscovered faults at the end of the test phase."
     • It is near impossible, because:
     • the input domain is too large,
     • the program may have a very large number of states based on valid and invalid inputs,
     • there are timing constraints,
     • the design issues (for the program) are too complex,
     • it is impossible to create all possible execution environments (weather, temperature, altitude, pressure, etc.).
     • "An input value that is valid but is not properly timed is called an inopportune input."
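A quick back-of-the-envelope calculation shows why the input domain alone defeats complete testing. Even a routine that adds two 32-bit integers has 2^64 distinct input pairs; the test rate assumed below (one billion tests per second) is an illustrative figure:

```python
# Why exhaustive ("complete") testing is impractical for even a tiny unit.

inputs = 2 ** 64                       # all (a, b) pairs of 32-bit values
tests_per_second = 10 ** 9             # assumed, generous test throughput
seconds_per_year = 60 * 60 * 24 * 365

years = inputs / (tests_per_second * seconds_per_year)
print(f"exhaustive testing would take about {years:.0f} years")
```

Running this gives on the order of 585 years, before considering program states, timing, or environments.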
  36. CENTRAL ISSUE IN TESTING
     • It is impossible to test the complete input domain to discover all faults.
     • Selecting a subset of the input domain is the best practical way to test the program.
  37. TEST ACTIVITIES
  38. TEST ACTIVITIES
     • Identify an objective to be tested:
     • A clear purpose must be associated with every test case.
     • Select inputs:
     • Based on the specification, the source code, and expectations.
     • Compute the expected outcome:
     • Can be done from a high-level understanding of the test objective.
  39. TEST ACTIVITIES
     • Set up the execution environment of the program:
     • Prepare the right execution environment.
     • For example, initialize systems external to the program, such as making a network connection or making the right database system available.
     • For example, initialize any remote, external system, such as the client code in a distributed system.
     • Execute the program:
     • With the selected inputs, and observe the actual outputs.
     • The concept of test coordination is used to synchronize the different components of a test case, for example components at different physical locations.
  40. TEST ACTIVITIES
     • Analyze the test results:
     • The comparison of actual and expected outcomes should be precise.
     • After analysis, a test verdict is assigned: pass, fail, or inconclusive.
     • A test report must be written, containing the test case and input values tested, the actual outcome, the execution environment, the reasons for failure, and an analysis of the failure.
  41. VERIFICATION AND VALIDATION
     • Two similar concepts.
     • Verification:
     • Determines that the product of a given development phase satisfies the requirements established before the start of that phase.
     • Applies to interim products such as the requirements specification, design specification, code, and user manual.
     • "Activities that check the correctness of a development phase are called verification activities."
  42. VERIFICATION AND VALIDATION
     • Validation:
     • Activities that determine and confirm that the product meets its intended use.
     • Aims at confirming that the product meets the user's expectations.
     • Focuses on the final product.
     • Validation activities should begin at an early stage of the development cycle, not only at its end.
  43. VERIFICATION VS. VALIDATION
     • Verification confirms that the product is being built correctly. Validation confirms that the correct product is being built.
     • Verification reviews interim work products such as the requirements specification, design specification, code, and user manual. Validation is performed at the end of the development of each part of the system.
     • Verification considers quality attributes such as consistency, correctness, and completeness. Validation considers correctness and user satisfaction.
     • Verification can be applied through static analysis techniques such as inspections, walkthroughs, reviews, and checklists. Validation can be applied by running the system in its actual environment using a variety of tests.
  44. CORRECTNESS
     • Correctness, from a software engineering perspective, can be defined as adherence to the specifications that determine how users can interact with the software and how the software should behave when it is used correctly.
     • If the software behaves incorrectly, it might take a considerable amount of time to achieve the intended task, or the task might not be achievable at all.
  45. CORRECTNESS
     • Below are some important rules for effective programming, which are consequences of program correctness theory:
     • Define the problem completely.
     • Develop the algorithm and then the program logic.
     • Reuse proven models as much as possible.
     • Prove the correctness of algorithms during the design phase.
     • Pay attention to the clarity and simplicity of the program.
     • Verify each part of a program as soon as it is developed.
  46. SOURCES OF INFORMATION FOR TEST CASES
     • A software development process generates a large body of information, such as the requirements specification, design documents, and source code.
     • In order to generate effective tests at a lower cost, test designers analyze the following sources of information:
     • Requirements and functional specifications.
     • Source code.
     • Input and output domains.
     • Operational profiles.
     • Fault models (error guessing, fault seeding, mutation analysis).
  47. TEST LEVELS
     • Testing is performed at different levels, involving parts of the system or the complete system.
     • Software goes through four stages of testing: unit, integration, system, and acceptance level testing.
     • The first three are performed inside the developing organization.
     • The last one is performed outside the developing organization.
     • These four stages of testing are organized in what is known as the V-model.
  48. [Diagram slide]
  49. CONTROL FLOW GRAPH
     • A control flow graph is a graphical representation of a program unit.
     • Three symbols are used.
  50. CONTROL FLOW GRAPH
     • A rectangle represents sequential computation.
     • A diamond box represents a decision point.
     • A decision box has two outgoing branches: True and False.
     • A small circle represents a merge point.
     • Merge points are not labelled.
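The three kinds of nodes can be illustrated by encoding a tiny control flow graph as an adjacency list. This is a sketch for a hypothetical unit with one decision (the node numbering and statements are illustrative):

```python
# CFG for a hypothetical unit:
#   1: read x            (rectangle: sequential computation)
#   2: x > 0 ?           (diamond: decision point, True/False branches)
#   3: y = x             (True branch)
#   4: y = -x            (False branch)
#   5: merge point       (small circle, unlabelled)
#   6: print y

cfg = {
    1: [2],
    2: [3, 4],   # decision node: exactly two outgoing branches
    3: [5],
    4: [5],      # both branches meet at the merge point
    5: [6],
    6: [],
}

# A decision node has two successors; a merge node has multiple predecessors.
decision_nodes = [n for n, succ in cfg.items() if len(succ) == 2]
merge_nodes = [n for n in cfg
               if sum(n in succ for succ in cfg.values()) > 1]
print(decision_nodes, merge_nodes)   # [2] [5]
```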
  51. TESTING TYPES
     • Black-box testing:
     • Black-box testing is a method of software testing that examines the functionality of an application based on its specifications. It is also known as specification-based testing.
     • An independent testing team usually performs this type of testing during the software testing life cycle.
     • This method of testing can be applied at each and every level of software testing.
     • Different techniques are involved in black-box testing,
     • such as equivalence classes, boundary value analysis, domain tests, orthogonal arrays, decision tables, state models, and all-pairs testing.
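Two of the black-box techniques named above can be sketched together. Assume a hypothetical specification that says ages 18 through 60 inclusive are eligible (the function and the range are illustrative): equivalence classes give one representative input per class, and boundary value analysis adds the values at and around each boundary.

```python
# The unit under test, viewed as a black box: tests come from the spec only.
def is_eligible(age):
    return 18 <= age <= 60

# Equivalence class testing: one representative per class
# (below range, inside range, above range).
equivalence_tests = [(10, False), (35, True), (70, False)]

# Boundary value analysis: values at and just outside each boundary.
boundary_tests = [(17, False), (18, True), (60, True), (61, False)]

for age, expected in equivalence_tests + boundary_tests:
    assert is_eligible(age) == expected, f"unexpected verdict for age {age}"
print("black-box tests passed")
```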
  52. TESTING TYPES
     • White-box testing:
     • White-box testing is a testing technique that examines the program structure and derives test data from the program logic/code. Other names for white-box testing are clear-box testing, open-box testing, logic-driven testing, path-driven testing, and structural testing.
     • White-box testing techniques:
     • Statement coverage: aims at exercising all programming statements with minimal tests.
     • Branch coverage: runs a series of tests to ensure that all branches are tested at least once.
     • Path coverage: corresponds to testing all possible paths, which means that every statement and branch is covered.
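The difference between statement and branch coverage can be seen on a unit with an `if` that has no `else` (a hypothetical example, not from the slides): one test can execute every statement while still leaving one branch outcome unexercised.

```python
# Statement coverage without branch coverage.

def abs_value(x):
    if x < 0:
        x = -x      # the only statement inside the branch
    return x

# A single negative input executes every statement in the unit,
# achieving 100% statement coverage:
assert abs_value(-5) == 5

# But the False outcome of the decision (falling straight through)
# was never taken; branch coverage demands a second test:
assert abs_value(7) == 7
print("statement and branch coverage both achieved")
```

Path coverage would subsume both here, since this unit has only two paths; in units with loops, the number of paths explodes, which is why path coverage is the strongest and least practical of the three.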
  53. TESTING TYPES
     • Model-based testing:
     • Model-based testing is a software testing technique in which the test cases are derived from a model that describes the functional aspects of the system under test.
     • It makes use of a model to generate tests for both offline and online testing.
     • Importance:
     • Unit testing alone is not sufficient to check the functionality.
     • It ensures that the system behaves according to the expected sequences of actions.
     • Model-based testing has been adopted as an integrated part of the testing process.
     • Commercial tools have been developed to support model-based testing.
  54. TESTING TYPES
     • Interface testing:
     • Interface testing is performed to evaluate whether systems or components pass data and control correctly to one another. It verifies that all interactions between modules work properly and that errors are handled properly.
     • Verify that communication between the systems is done correctly.
     • Verify that all supported hardware/software has been tested.
     • Verify that all linked documents are supported and can be opened on all platforms.
     • Verify the security requirements or encryption while communication happens between systems.
     • Check whether the solution can handle network failures between the web site and the application server.
  55. TESTING TYPES
     • Unit testing:
     • Tests individual program units, such as a function or a class.
     • After the units are known to work properly, modules are assembled into larger subsystems.
     • Performed by the software developer.
     • Integration testing:
     • Performed on the units of the system assembled after unit testing.
     • Performed by the software developers and the integration test engineers.
  56. TESTING TYPES
     • System testing:
     • The objective is to construct a reasonably stable system that can withstand system-level testing.
     • System-level testing includes functionality, security, reliability, stability, stress, performance, and load testing.
     • It aims to discover most of the faults and to verify that the fixes are working.
     • System testing comprises creating a test plan, designing a test suite, preparing test environments, and executing the tests.
  57. TESTING TYPES
     • Regression testing:
     • Performed throughout the life cycle of system development.
     • Performed whenever a component of the system is modified.
     • The idea is to verify that the modification does not break existing functionality and does not introduce new faults.
     • It is not a distinct level of testing; rather, it is a sub-phase of unit, integration, and system-level testing.
  58. TESTING TYPES
     • Acceptance testing:
     • After the completion of all the above testing, the product is delivered to the customer.
     • The customer performs their own series of tests, called acceptance tests.
     • The objective of this testing is to measure the quality of the product.
     • The key notion is the customer's expectations from the system.
  59. TESTING TYPES
     • Two kinds of acceptance testing:
     • User Acceptance Testing (UAT): conducted by the customer to ensure that the system satisfies the contractual acceptance criteria.
     • Business Acceptance Testing (BAT): undertaken within the supplier's development organization, to ensure that the system will eventually pass the UAT.