Testing fundamentals


An overview of software testing fundamentals, covering black box and white box testing.

Published in: Technology

  1. TESTING Process Improvement
  2. Objectives • To introduce testing and its importance. • To explain the difference between testing and debugging. • To explain the methodologies of testing. • To explain the different levels of testing. • To explain the steps involved in the testing life cycle.
  3. Introduction • Testing is the process of validating and verifying whether the software product – meets all its specifications – produces the expected results for the set of possible inputs. • Validation: are we building the right product? • Verification: are we building the product right?
  4. Introduction • Software testing can be stated as the process of validating and verifying that a software program/application/product: – does not contain defects; – does not behave in any undesirable way; – meets the requirements that guided its design and development; – works as expected; and – can be implemented with the same characteristics. • A primary purpose of testing is to detect software failures so that defects may be discovered and corrected. • Testing cannot establish that a product functions properly under all conditions but can only establish that it does not function properly under specific conditions.
  5. Testing vs Debugging • A software bug is the common term used to describe an error, flaw, mistake, failure, or fault in a computer program or system that produces an incorrect or unexpected result, or causes it to behave in unintended ways. • Testing identifies the bug and is done by the tester. • Debugging removes the bug and is done by the developer. • Debugging is the consequence of successful testing: it starts with the results from the execution of test cases. • In short, testing finds an error/bug, whereas debugging finds the root cause of that error/bug and resolves it.
  6. Introduction • The scope of software testing often includes examination of the code as well as execution of that code in various environments and conditions: does it do what it is supposed to do, and does it do what it needs to do? • In the current culture of software development, a testing organization may be separate from the development team.
  7. Difference between defect, error, bug, failure and fault • Error: A discrepancy between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition. • Failure: The inability of a system or component to perform its required functions within specified performance requirements. • Bug: A fault in a program which causes the program to perform in an unintended or unanticipated manner. • Fault: An incorrect step, process, or data definition in a computer program which causes the program to perform in an unintended or unanticipated manner. • Defect: A mismatch between the actual behavior of the software and its requirements.
  8. Functional vs non-functional testing • Functional testing refers to activities that verify a specific action or function of the code. • These are usually found in the code requirements documentation, although some development methodologies work from use cases or user stories. • Functional tests tend to answer the question of "can the user do this" or "does this particular feature work". • Non-functional testing refers to aspects of the software that may not be related to a specific function or user action, such as scalability or other performance behavior under certain constraints, or security. • Non-functional requirements tend to be those that reflect the quality of the product, particularly in the context of the suitability perspective of its users.
  9. Software verification and validation • Verification: Have we built the software right? (i.e., does it match the specification). • Validation: Have we built the right software? (i.e., is this what the customer wants). – Verification is the process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase. – Validation is the process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements.
  10. Testing methods • Software testing methods are traditionally divided into white-box and black-box testing. • These two approaches describe the point of view that a test engineer takes when designing test cases. • White box testing: the tester has access to the internal data structures and algorithms, including the code that implements them. • Black box testing: the software is treated as a "black box", without any knowledge of its internal implementation.
  11. Types of white box testing • The following types of white box testing exist: • API testing (application programming interface) - testing of the application using public and private APIs • Code coverage - creating tests to satisfy some criteria of code coverage (e.g., the test designer can create tests to cause all statements in the program to be executed at least once) • Fault injection methods - improving the coverage of a test by introducing faults to test code paths
  12. White Box Testing • Test coverage: • White box testing methods can also be used to evaluate the completeness of a test suite that was created with black box testing methods. • This allows the software team to examine parts of a system that are rarely tested and ensures that the most important function points have been tested. • Two common forms of code coverage are: • Function coverage, which reports on the functions executed • Statement coverage, which reports on the number of lines executed to complete the test • Both return a code coverage metric, measured as a percentage.
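As a rough illustration (not from the slides), statement coverage can be demonstrated with a toy line tracer built on Python's `sys.settrace`; the `grade` function and its inputs are invented purely for the example:

```python
import sys

def grade(score):
    """Toy function under test: three branches to cover."""
    if score >= 90:
        return "A"
    if score >= 60:
        return "B"
    return "F"

def statement_coverage(func, inputs):
    """Run func on each input, recording which of its lines execute."""
    executed = set()

    def tracer(frame, event, arg):
        # Record line events, but only for frames of the function under test.
        if event == "line" and frame.f_code is func.__code__:
            executed.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        for x in inputs:
            func(x)
    finally:
        sys.settrace(None)
    return executed

partial = statement_coverage(grade, [95])        # exercises only the "A" path
full = statement_coverage(grade, [95, 75, 10])   # exercises all three paths
# The fuller input set executes strictly more statements than the partial one.
```

In practice a real tool such as `coverage.py` would be used instead of a hand-rolled tracer; the sketch only shows the idea of comparing executed statements against the program's statements.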
  13. Types of Black box testing • Black box testing methods include: • equivalence partitioning • boundary value analysis
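A minimal sketch of both techniques, using a hypothetical `can_vote` rule (ages 18 to 120 are assumed valid purely for illustration):

```python
def can_vote(age):
    """System under test: hypothetical rule that voters must be 18 to 120."""
    return 18 <= age <= 120

# Equivalence partitioning: one representative input per partition,
# on the assumption that all values in a partition behave alike.
partitions = [
    (10, False),   # below the valid range
    (35, True),    # inside the valid range
    (150, False),  # above the valid range
]

# Boundary value analysis: test exactly at and just outside each boundary,
# where off-by-one defects tend to hide.
boundaries = [(17, False), (18, True), (120, True), (121, False)]

for value, expected in partitions + boundaries:
    assert can_vote(value) == expected
```

Note how the two methods complement each other: partitioning keeps the test set small, while boundary analysis targets the edges where partitions meet.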
  14. Black Box Testing • Advantages and disadvantages: • The black box tester has no "bonds" with the code, and a tester's perception is very simple: the code must have bugs. • Using the principle, "Ask and you shall receive," black box testers find bugs where programmers do not. • The tester doesn't know how the software being tested was actually constructed. • As a result, there are situations when (1) a tester writes many test cases to check something that could have been tested by only one test case, and/or (2) some parts of the back-end are not tested at all. • Therefore, black box testing has the advantage of "an unaffiliated opinion", on the one hand, and the disadvantage of "blind exploring", on the other.
  15. Gray box testing • Gray box testing involves having knowledge of internal data structures and algorithms for purposes of designing the test cases, but testing at the user, or black-box level. • Manipulating input data and formatting output do not qualify as grey box, because the input and output are clearly outside of the "black-box" that we are calling the system under test. • This distinction is particularly important when conducting integration testing between two modules of code written by two different developers, where only the interfaces are exposed for test. • However, modifying a data repository does qualify as grey box, as the user would not normally be able to change the data outside of the system under test. • Grey box testing may also include reverse engineering to determine, for instance, boundary values or error messages.
  16. Testing levels • Unit testing • Integration testing • Module level testing • System testing • System integration testing
  17. Unit Testing • Unit testing refers to tests that verify the functionality of a specific section of code, usually at the function level. • In an object-oriented environment, this is usually at the class level, and the minimal unit tests include the constructors and destructors. • Unit testing is also called component testing.
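A minimal unit test sketch using Python's standard `unittest` module; the `Stack` class is a hypothetical unit invented for the example:

```python
import unittest

class Stack:
    """Minimal class under test (hypothetical example)."""
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

class StackTest(unittest.TestCase):
    # Unit tests exercise one class in isolation, one behavior per test.
    def test_push_then_pop_returns_last_item(self):
        s = Stack()
        s.push(1)
        s.push(2)
        self.assertEqual(s.pop(), 2)

    def test_pop_on_empty_stack_raises(self):
        with self.assertRaises(IndexError):
            Stack().pop()
```

Such tests are typically run with `python -m unittest`; each test method sets up its own fresh `Stack`, keeping the tests independent of one another.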
  18. Integration and Module Testing • Integration testing is any type of software testing that seeks to verify the interfaces between components against a software design. Software components may be integrated in an iterative way or all together. • Normally the former is considered a better practice since it allows interface issues to be localized more quickly and fixed. • Module testing refers to tests that verify the functionality of a specific module, usually at the function level.
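A small illustrative sketch (both components are invented for the example): unlike a unit test, an integration test exercises the interface between two components together, the way they would be wired in the real system:

```python
class UserStore:
    """Component A: stores users by id (hypothetical)."""
    def __init__(self):
        self._users = {}

    def add(self, user_id, name):
        self._users[user_id] = name

    def get(self, user_id):
        # Interface contract assumed here: returns None for unknown ids.
        return self._users.get(user_id)

class Greeter:
    """Component B: depends only on UserStore's get() interface."""
    def __init__(self, store):
        self._store = store

    def greet(self, user_id):
        name = self._store.get(user_id)
        return f"Hello, {name}!" if name else "Hello, stranger!"

# Integration test: drive both components through their shared interface.
store = UserStore()
store.add(1, "Ada")
greeter = Greeter(store)
assert greeter.greet(1) == "Hello, Ada!"
assert greeter.greet(2) == "Hello, stranger!"
```

If the two components disagree about the interface (say, `get` raised an exception for unknown ids instead of returning `None`), a test like this localizes the mismatch before system testing begins.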
  19. System and System Integration testing • System testing tests a completely integrated system to verify that it meets its requirements. • System testing falls within the scope of black box testing, and as such, should require no knowledge of the inner design of the code or logic. • System integration testing verifies that a system is integrated with any external or third-party systems defined in the system requirements. • System integration testing is the process of verifying the synchronization between two or more software systems; it can be performed after software system collaboration is completed.
  20. A sample testing cycle • Test planning: Test strategy, test plan, testbed creation. Since many activities will be carried out during testing, a plan is needed. • Test development: Test procedures, test scenarios, test cases, test datasets, and test scripts to use in testing the software. • Test execution: Testers execute the software based on the plans and test documents, then report any errors found to the development team. • Test reporting: Once testing is completed, testers generate metrics and make final reports on their test effort and whether or not the software tested is ready for release. • Defect retesting: Once a defect has been dealt with by the development team, it is retested by the testing team; also known as resolution testing. • Regression testing: It is common to have a small test program built of a subset of tests, for each integration of new, modified, or fixed software, in order to ensure that the latest delivery has not broken anything, and that the software product as a whole is still working correctly.
  21. A sample testing cycle • Test closure: Once the test meets the exit criteria, activities such as capturing the key outputs, lessons learned, results, logs, and documents related to the project are archived and used as a reference for future projects. • Test result analysis: Or defect analysis, is done by the development team, usually along with the client, in order to decide which defects should be fixed, rejected (i.e., the software is found to be working properly), or deferred to be dealt with later. • Test strategy is an outline that describes the testing portion of the software development cycle. It is created to inform project managers, testers, and developers about key issues of the testing process. This includes the testing objective, methods of testing new functions, total time and resources required for the project, and the testing environment.
  22. A sample testing cycle • A test plan is a document detailing a systematic approach to testing a system such as a machine or software. The plan typically contains a detailed understanding of what the eventual workflow will be. • A test bed is a platform for experimentation of large development projects. • A typical testbed could include software, hardware, and networking components. • The term is used across many disciplines to describe a development environment that is shielded from the hazards of testing in a live or production environment.
  23. IEEE Test Plan Structure • Test plan identifier • Introduction • Test items • Features to be tested • Features not to be tested • Approach • Item pass/fail criteria • Suspension criteria and resumption requirements • Test deliverables • Testing tasks • Environmental needs • Responsibilities • Staffing and training needs • Schedule • Risks and contingencies • Approvals
  24. Different ways of Testing • A test case in software engineering is a set of conditions or variables under which a tester will determine whether an application or software system is working correctly or not. • Manual testing is the process of manually testing software for defects. It requires a tester to play the role of an end user, and use most or all features of the application to ensure correct behavior. To ensure completeness of testing, the tester often follows a written test plan that leads them through a set of important test cases. • Automated testing is the use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions. • Commonly, test automation involves automating a manual process already in place that uses a formalized testing process.
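A minimal sketch of test automation (the harness and the `discount` rule are invented for illustration): the harness sets up the test cases, executes them, compares actual outcomes to predicted ones, and reports the result, which is exactly the loop a manual tester would otherwise perform by hand:

```python
def run_suite(cases, func):
    """Tiny automation harness: execute each case, compare, report."""
    results = []
    for name, args, expected in cases:
        actual = func(*args)                    # execute the test
        results.append((name, actual == expected, actual, expected))
    passed = sum(1 for _, ok, _, _ in results if ok)
    print(f"{passed}/{len(results)} test cases passed")  # report
    return results

def discount(price, is_member):
    """System under test: hypothetical rule, members get 10% off."""
    return round(price * (0.9 if is_member else 1.0), 2)

# Test preconditions: each case names its scenario, inputs, and
# predicted outcome up front, so the run needs no human judgment.
cases = [
    ("member gets 10% off", (100.0, True), 90.0),
    ("non-member pays full price", (100.0, False), 100.0),
]
results = run_suite(cases, discount)
```

Because the expected outcomes are recorded in the cases themselves, the same suite can be re-run unattended after every change, which is what makes it suitable for the regression testing described earlier.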
  25. Key Points • Verification and validation are the basics of testing. • Testing must be performed properly to verify that the software meets its given specifications. • Different levels of testing are to be performed as per the chosen methodology. • A test plan is very important for starting the testing.