
System testing


This is all about system testing.

Published in: Engineering


  1. System Testing
  2. What is system testing? • System testing of software or hardware is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. • System testing is carried out by specialist testers or independent testers. • System testing should investigate both functional and non-functional requirements of the system.
  3. What is a functional requirement? • Business rules • Transaction corrections, adjustments, and cancellations • Administrative functions • Authentication • Authorization levels • External interfaces • Certification requirements • Reporting requirements • Legal or regulatory requirements
  4. Non-functional requirements • Performance • Scalability • Capacity • Availability • Reliability • Recoverability • Maintainability • Serviceability • Security • Regulatory • Manageability • Environmental • Data integrity • Usability • Interoperability
  5. What is the functional requirement of this milk carton? What is the non-functional requirement of this hard hat?
  6. Types of system testing • Black-box testing • White-box testing • Gray-box testing
  7. Black-box testing • A method of software testing that verifies the functionality of an application without specific knowledge of the application's code or internal structure. • Also known as functional testing. • Tests are based on requirements and functionality.
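As a minimal sketch of the black-box approach: the test below exercises a hypothetical `calculate_discount` function purely through its specified behaviour (inputs and expected outputs), with no knowledge of its internals. The function name and the discount rule are invented for illustration.

```python
# Black-box sketch (hypothetical function and rule, for illustration only):
# the tests use only the specification -- orders of 100 or more get 10% off.

def calculate_discount(order_total):
    """Stand-in for the function under test."""
    return order_total * 0.9 if order_total >= 100 else order_total

# Tests derived from the requirement alone, not from the code's internals.
assert calculate_discount(100) == 90.0   # at the threshold: discount applies
assert calculate_discount(99) == 99      # below the threshold: unchanged
print("black-box tests passed")
```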
  8. White-box testing • A method of testing software that tests the internal structures or workings of an application, as opposed to its functionality. • Also known as clear-box testing, glass-box testing, transparent-box testing, and structural testing.
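By contrast, a white-box test is written with the implementation open in front of the tester, aiming to cover each internal branch. The `classify_age` function below is hypothetical, invented to make the branch coverage visible.

```python
# White-box sketch: the tester reads the implementation below and writes
# one test per internal branch (structural coverage).

def classify_age(age):
    if age < 0:
        raise ValueError("age cannot be negative")
    elif age < 18:
        return "minor"
    return "adult"

assert classify_age(5) == "minor"     # covers the age < 18 branch
assert classify_age(30) == "adult"    # covers the fall-through branch
try:
    classify_age(-1)                  # covers the error branch
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError for a negative age")
print("all three branches exercised")
```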
  9. Gray-box testing • A combination of white-box testing and black-box testing. • The aim of this testing is to find defects, if any, caused by improper structure or improper usage of applications. • For a complete software examination, both white-box and black-box tests are required.
  10. Differences between black-box and white-box testing: 1) In black-box testing the internal structure/design/implementation of the item being tested is NOT known to the tester; in white-box testing it is known to the tester. 2) Black-box testing is mainly applicable to higher levels of testing; white-box testing to lower levels. 3) Black-box testing requires no programming knowledge; white-box testing requires it. 4) Black-box testing requires no implementation knowledge; white-box testing requires it. 5) Black-box testing is done by independent software testers; white-box testing is done by software developers.
  11. Activity network for system testing • An activity network for system testing is a diagram of the testing activities and the dependencies between them, showing the order in which the activities must be carried out.
  12. Activity network for system testing • A test plan entails the following activities: 1. Prepare the test plan. 2. Specify conditions for user acceptance testing. 3. Prepare test data for program testing. 4. Prepare test data for transaction path testing. 5. Plan user training. 6. Compile/assemble programs. 7. Prepare job performance aids. 8. Prepare operational documents.
  13. Design specification for test plan • Test design specification: the test design is the first stage in developing the tests for a software testing project. It records what needs to be tested and is derived from the documents that come into the testing stage, such as requirements and designs. It records which features of a test item are to be tested, and how a successful test of those features would be recognized.
  14. Design specification for test plan • Test design specification identifier: • Unique "short" name for the case • Version date and version number of the case • Author and contact information • Revision history
  15. Design specification for test plan • Features to be tested: • Features • Attributes and characteristics • Groupings of features • Level of testing appropriate to the test item, if this test design specification covers more than one level of testing • Reference to the original documentation where this test objective (feature) was obtained
  16. Design specification for test plan • Approach refinements: • Selection of specific test techniques • Reasons for technique selection • Method(s) for results analysis • Tools, etc. • Relationship of the test items/features to the levels of testing • Summary of any common information that may relate to multiple test cases or procedures; centralizing common information reduces document size and simplifies maintenance • Shared environment • Common setup/recovery • Case dependencies
  17. Design specification for test plan • Test identification: • Identification of each test case, with a short description of the case, its test level, or any other appropriate information required to describe the test relationship • Identification of each test procedure, with a short description of the procedure, its test level, or any other appropriate information required to describe the test relationship
  18. Design specification for test plan • Feature pass/fail criteria: describe the criteria for assessing the feature or set of features, and whether the test(s) were successful or not.
  19. Performance criteria • Performance characteristics: • Performance acceptance criteria • Performance testing objectives • Performance objectives • Performance requirements • Performance goals • Performance targets • Performance thresholds • Performance budgets or allocations
  20. Performance criteria: testing types • Load testing • Stress testing • Soak testing • Spike testing • Configuration testing • Isolation testing
  21. Performance criteria: sources of performance acceptance criteria. When identifying performance acceptance criteria, consider the following: • Business requirements • User expectations • Contractual obligations • Regulatory compliance criteria and industry standards • Service level agreements • Resource utilization targets • Various and diverse, realistic workload models • The entire range of anticipated load conditions • Conditions of system stress • Entire scenarios and component activities • Key performance indicators • Previous releases of the application • Competing applications • Optimization objectives • Safety factors, headroom, and scalability • Schedule, staffing, budget, resources, and other priorities
  22. Factors that determine system quality • Flexibility: the ability of software to add, modify, or remove functionality without damaging the current system. • Extensibility: a subset of flexibility, and one of the most important properties of a quality system. • Maintainability: the effort required to fix and test an error. The more correct and useful documentation exists, the easier maintenance becomes.
  23. Factors that determine system quality • Performance and efficiency: performance is mostly about the response time of the system. It is assessed by processing speed, response time, resource utilization, throughput, and efficiency.
  24. Factors that determine system quality • Scalability: a scalable system responds to user actions in an acceptable amount of time, even as load increases. There are two types of scalability. Vertical scalability: more hardware may be added to handle increasing user transactions, but the architecture should not change while doing this. Horizontal scalability: the ability to run on multiple machines, and on an increasing number of machines, processing in parallel.
  25. Factors that determine system quality • Availability, robustness, fault tolerance, and reliability: robust software should not lose its availability even in the worst failure states. • Usability and accessibility: user interfaces are the only parts of the software visible to the user; usability is the ease with which a user is able to navigate the system. • Platform compatibility and portability: quality software should run on as many different platforms as it can. Portability is the ease with which software can be migrated from one environment to another.
  26. Factors that determine system quality • Testability and manageability: the effort required to test a program so that it performs its intended function. Creating a successful logging system is another very important aspect of manageability. • Security: security is a very important issue in any system design. It may include authorization and authentication techniques, network attack protections, data encryption, and so on.
  27. Factors that determine system quality • Functionality and correctness: the degree to which software satisfies stated needs. Functionality (or correctness) is the conformity of the software with actual requirements and specifications; correctness is the extent to which a program satisfies its specifications.
  28. Examples of different types of system testing • Expected outputs: before testing begins, the expected outputs must be defined so that actual results can be compared against them.
  29. Examples of different types of system testing • Input testing: on the basis of the expected outputs, we feed input data to the system. It should be checked that the inputs are correct; if the inputs are wrong or contain a bug, we will not be able to get the expected outputs.
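A small sketch of input testing in Python: inputs are checked before they reach the system, so bad data cannot silently corrupt the expected outputs. The validator and its rules are hypothetical, invented for illustration.

```python
# Input-testing sketch: validate inputs up front so that wrong or buggy
# data is caught before it can distort the expected outputs.

def validate_order_input(quantity, price):
    """Hypothetical validator for an order's input fields."""
    errors = []
    if not isinstance(quantity, int) or quantity <= 0:
        errors.append("quantity must be a positive integer")
    if not isinstance(price, (int, float)) or price < 0:
        errors.append("price must be non-negative")
    return errors

assert validate_order_input(3, 9.99) == []   # correct input passes
assert validate_order_input(-1, 9.99)        # bad quantity is caught
assert validate_order_input(3, -5)           # bad price is caught
```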
  30. Examples of different types of system testing • Installation testing: before any software or system can run, it must be installed or set up. If there is a problem in the installation step, the system will not work properly and the software may not even be accessible, so this part is very important, and we have to check whether there is any problem with the installation.
  31. Examples of different types of system testing • Graphical user interface (GUI) testing: to generate a set of test cases, test designers attempt to cover all the functionality of the system and fully exercise the GUI itself. The difficulty in accomplishing this task is twofold: dealing with domain size and with sequences. In addition, testers face more difficulty when they have to do regression testing.
  32. Examples of different types of system testing • Regression testing: a type of software testing that verifies that software which was previously developed and tested still performs correctly after it has been changed or interfaced with other software. Changes may include software enhancements, patches, configuration changes, etc. During regression testing, new software bugs or regressions may be uncovered. Sometimes a software change impact analysis is performed to determine which areas could be affected by the proposed changes; these may include functional and non-functional areas of the system.
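Regression testing can be sketched as a suite of pinned-down cases that earlier releases already passed, re-run after every change. The function and the cases below are hypothetical.

```python
# Regression-test sketch: behaviour verified in an earlier release is
# recorded as cases; any change must keep all of them passing.

def format_name(first, last):
    # Current version of the function; earlier versions passed these cases.
    return f"{last}, {first}"

REGRESSION_CASES = [
    (("Ada", "Lovelace"), "Lovelace, Ada"),
    (("Alan", "Turing"), "Turing, Alan"),
]

for args, expected in REGRESSION_CASES:
    actual = format_name(*args)
    assert actual == expected, f"regression for {args}: got {actual!r}"
print("no regressions detected")
```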
  33. Examples of different types of system testing • Usability testing: a technique used in user-centered interaction design to evaluate a product by testing it on users. It can be seen as an irreplaceable usability practice, since it gives direct input on how real users use the system. This is in contrast with usability inspection methods, where experts use different methods to evaluate a user interface without involving users.
  34. Examples of different types of system testing • Software performance testing: a testing practice performed to determine how a system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate, or verify other quality attributes of the system, such as scalability, reliability, and resource usage.
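A minimal performance-test sketch: time one operation and compare the response time against an acceptance threshold. Both the operation and the 0.5-second threshold are assumptions made for illustration.

```python
# Performance-testing sketch: measure responsiveness of a single operation
# against an assumed acceptance criterion.
import time

def operation():
    return sum(range(100_000))   # stand-in for the work being measured

start = time.perf_counter()
result = operation()
elapsed = time.perf_counter() - start

THRESHOLD_SECONDS = 0.5          # assumed acceptance criterion
assert elapsed < THRESHOLD_SECONDS, f"too slow: {elapsed:.3f}s"
print(f"completed in {elapsed:.4f}s")
```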
  35. Examples of different types of system testing • Compatibility testing: part of software non-functional testing, conducted on the application to evaluate its compatibility with the computing environment. • Exception handling: the process of responding to the occurrence, during computation, of exceptions – anomalous or exceptional conditions requiring special processing – often changing the normal flow of program execution.
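Exception handling can be sketched in a few lines of Python: the anomalous condition (division by zero) leaves the normal flow and receives special processing in a handler. The function is invented for illustration.

```python
# Exception-handling sketch: the anomaly interrupts the normal flow and
# is given special processing, so the caller still gets a usable value.

def safe_ratio(numerator, denominator):
    try:
        return numerator / denominator   # normal flow
    except ZeroDivisionError:
        return 0.0                       # special processing for the anomaly

assert safe_ratio(10, 4) == 2.5   # normal path
assert safe_ratio(10, 0) == 0.0   # exceptional path, handled
```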
  36. Examples of different types of system testing • Load testing: the process of putting demand on a software system or computing device and measuring its response. Load testing is performed to determine a system's behavior under both normal and anticipated peak load conditions. It helps to identify the maximum operating capacity of an application as well as any bottlenecks, and to determine which element is causing degradation.
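A toy load-test sketch: fire many concurrent requests at a simulated service and measure the total response time. The "service" here is a stand-in function with an artificial delay, not a real system.

```python
# Load-testing sketch: put concurrent demand on a simulated service and
# measure how long it takes to handle the whole batch.
import time
from concurrent.futures import ThreadPoolExecutor

def simulated_service(request_id):
    time.sleep(0.01)              # pretend each request takes ~10 ms
    return request_id

def run_load(num_requests, concurrency):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(simulated_service, range(num_requests)))
    return len(results), time.perf_counter() - start

handled, elapsed = run_load(num_requests=50, concurrency=10)
print(f"handled {handled} requests in {elapsed:.2f}s")
```

Varying `num_requests` and `concurrency` and watching how `elapsed` grows is the essence of finding the capacity limit.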
  37. Examples of different types of system testing • Stress testing (sometimes called torture testing): a form of deliberately intense or thorough testing used to determine the stability of a given system or entity. It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results.
  38. Examples of different types of system testing • Security testing: a process intended to reveal flaws in the security mechanisms of an information system that protect data and maintain functionality as intended. Due to the logical limitations of security testing, passing security testing is not an indication that no flaws exist or that the system adequately satisfies its security requirements.
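One small slice of security testing can be sketched as probing a validator with injection-style inputs and asserting they are rejected. The whitelist rule and function are hypothetical, and passing these probes does not, of course, prove the absence of flaws.

```python
# Security-testing sketch: feed hostile, injection-style inputs to a
# hypothetical whitelist validator and assert they are rejected.

def is_valid_username(name):
    """Hypothetical rule: ASCII letters, digits, and underscores, max 32."""
    stripped = name.replace("_", "")
    return name.isascii() and stripped.isalnum() and len(name) <= 32

assert is_valid_username("alice_01")                          # benign input
assert not is_valid_username("admin'; DROP TABLE users;--")   # SQL-style probe
assert not is_valid_username("<script>alert(1)</script>")     # XSS-style probe
```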
  39. Examples of different types of system testing • Recovery testing: the activity of testing how well an application is able to recover from crashes, hardware failures, and other similar problems. • Output testing: in every testing phase we are ultimately testing the outputs. After checking every part of the testing and fixing all the bugs, we re-check the system to ensure that it is working correctly.
  40. Thanks a lot!