Test Levels & Techniques

Transcript

  • 1. Static Testing
    • Static testing refers to testing that takes place without executing the software: examining and reviewing it.
    • Dynamic Testing
    • Dynamic testing is what you would normally think of as testing: executing and using the software.
  • 2. Dynamic Testing
    • Techniques used are determined by the type of testing that must be conducted
      • Functional
      • Structural
  • 3. Functional Testing
    • Structure of the program is not considered.
    • Test cases are decided based on the requirements or specification of the program or module.
    • Hence it is often called “Black Box Testing”.
  • 4. Structural Testing
    • Concerned with testing the implementation of the program.
    • Focus on the internal structure of the program.
    • The intent of structural testing is not to exercise all the different input or output conditions but to exercise the different programming structures and data structures in the program.
  • 5. Testing Levels
    • Phases of software testing:
      • Unit Testing
      • Integration/Build Testing
      • Validation/Functional Testing
      • System Testing
      • Acceptance Testing
  • 6. Unit Testing
    • Test each module individually.
    • Follows a white box approach (tests the logic of the program)
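As an illustration of testing one module individually, here is a minimal sketch using Python's `unittest` framework. The `apply_discount` function is a hypothetical module under test, not something from the slides.

```python
import unittest

# Hypothetical module under test: a simple discount calculator
# (for illustration only; not from the slides).
def apply_discount(price, percent):
    """Return price reduced by percent, rejecting impossible percentages."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (100 - percent) / 100

class TestApplyDiscount(unittest.TestCase):
    """Unit tests exercise this one module in isolation."""

    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(99.0, 0), 99.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run the tests without exiting the interpreter.
unittest.main(argv=["unit"], exit=False)
```

Because the tests know the module's internal rules (the 0–100 range check), this is white box in spirit: the cases are chosen from the logic of the code.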
  • 7. Integration testing
    • Integrates two or more modules, i.e. tests the communication between the modules.
    • Follows a white box approach (testing the code)
  • 8. Integration Testing - Overview
  • 9. System Testing
    • Confirms that the system as a whole delivers the functionality originally required.
    • Follows a black box approach.
  • 10. System Testing - Goals
    • Incorrect or missing features
    • Performance errors
    • Security errors
    • User interface errors
    • Configuration and compatibility
    • Compliance to required standards
  • 11. System Testing - Overview
  • 12. System Testing - Overview
    • System Tests:
      • Define test procedures and instructions
      • Review test plans
      • Execute test plans
      • Record results
  • 13. System Testing - Strategy
    • Developing System Tests
      • stress tests
      • security tests
      • recovery tests
      • performance tests
  • 14. User Acceptance Testing (UAT)
    • Building the confidence of the client and users is the role of the acceptance test phase.
    • It depends on the business scenario.
    • Tests those functions required by the customer to demonstrate sufficient functionality and reliability to warrant acceptance.
  • 15. V-Model
    • Business scenario ↔ User Acceptance Testing (UAT)
    • SRS ↔ System Testing
    • HLD ↔ Integration Testing
    • LLD ↔ Unit Testing
  • 16. Software Testing
  • 17. Testing Techniques
    • White Box
    • Black Box
    • Incremental
    • Thread
  • 18. White Box Testing
  • 19. White Box Testing - Internal
    • Purpose of Unit Tests:
      • Test all loops
      • Test Basis paths
      • Test conditional statements
      • Test data structures
      • Develop unit tests after the design portion of the Build Definition Spec. is completed.
  • 20. White Box Testing - Internal
  • 21. White Box Testing Techniques
    • Statement coverage – execute all statements at least once.
    • Decision coverage - execute each decision direction at least once.
    • Condition coverage – execute each condition in a decision with all possible outcomes at least once
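To make the three criteria concrete, here is a sketch around a hypothetical `classify` function with a compound decision; the comments note which test cases each criterion demands.

```python
# Hypothetical function used to contrast the coverage criteria.
def classify(a, b):
    result = "low"
    if a > 0 and b > 0:  # compound decision with two conditions
        result = "high"
    return result

# Statement coverage: one test that takes the True branch executes
# every statement at least once.
assert classify(1, 1) == "high"

# Decision coverage additionally needs the False branch of the decision.
assert classify(-1, 1) == "low"

# Condition coverage needs each condition (a > 0, b > 0) to take
# both outcomes across the test set.
assert classify(1, -1) == "low"   # a > 0 is True, b > 0 is False
assert classify(-1, -1) == "low"  # both conditions False
```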
  • 22. White Box Testing Techniques
    • Loop Testing:
      • Simple Loops
      • Nested Loops
      • Concatenated Loops
      • Unstructured Loops
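A minimal sketch of simple-loop testing, using a hypothetical `sum_first` function: the classic cases skip the loop entirely, run it exactly once, run it a typical number of times, and run it at its maximum.

```python
# Hypothetical summing loop used to illustrate simple-loop testing.
def sum_first(values, n):
    """Sum the first n items of values, clamping n to the list length."""
    total = 0
    for i in range(min(n, len(values))):
        total += values[i]
    return total

data = [2, 4, 6, 8]
# Classic simple-loop cases:
assert sum_first(data, 0) == 0    # zero iterations (loop skipped)
assert sum_first(data, 1) == 2    # exactly one iteration
assert sum_first(data, 3) == 12   # a typical number of iterations
assert sum_first(data, 10) == 20  # n beyond the maximum is clamped
```

Nested and concatenated loops extend the same idea: fix the outer loop at a typical value while exercising the inner loop's cases, then swap.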
  • 23. Black Box Testing
  • 24. Black Box Testing - External
    • Black Box Testing verifies that the requirements have been met.
      • How is functional validity tested?
      • How are system behavior and performance tested?
      • What classes of input will make good test cases?
      • Is the system particularly sensitive to certain input values?
      • How are the boundaries of a data class isolated?
      • What data rates and data volume can the system tolerate?
      • What effect will specific combinations of data have on system operation?
  • 25. Black Box Testing - External
    • Black Box Tests look for:
      • incorrect or missing functions
      • interface errors
      • errors in external database access
      • behavior or performance errors
  • 26. Black Box Testing Techniques
      • Equivalence Partitioning
      • Boundary Analysis
      • Error Guessing
  • 27. Equivalence Partitioning
    • A subset of data that is representative of a larger class
    • For example, a program which edits credit limits within a given range ($10,000–$15,000) would have 3 equivalence classes:
      • Less than $10,000 (invalid)
      • Between $10,000 and $15,000 (valid)
      • Greater than $15,000 (invalid)
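The three classes above can be covered with one representative test per class. `is_valid_credit_limit` is a hypothetical validator for the slide's $10,000–$15,000 range, not part of the original material.

```python
# Hypothetical credit-limit validator for the $10,000-$15,000 range
# described on the slide.
def is_valid_credit_limit(amount):
    """Return True when the credit limit is within the accepted range."""
    return 10_000 <= amount <= 15_000

# One representative test case per equivalence class:
assert not is_valid_credit_limit(5_000)   # less than $10,000 (invalid)
assert is_valid_credit_limit(12_500)      # between $10,000 and $15,000 (valid)
assert not is_valid_credit_limit(20_000)  # greater than $15,000 (invalid)
```

Any value inside a class is assumed to behave like every other value in that class, so one representative per class suffices.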
  • 28. Boundary Analysis
    • A technique that consists of developing test cases and data that focus on the input and output boundaries of a given function
  • 29. Boundary Analysis continued...
    • In the same credit limit example, boundary analysis would test:
      • Low boundary plus or minus one ($9,999 and $10,001)
      • On the boundary ($10,000 and $15,000)
      • Upper boundary plus or minus one ($14,999 and $15,001)
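The boundary values above, sketched against the same hypothetical credit-limit validator used in the equivalence-partitioning example:

```python
# Hypothetical validator for the slide's $10,000-$15,000 credit range.
def is_valid_credit_limit(amount):
    return 10_000 <= amount <= 15_000

# Boundary analysis: test just below, on, and just above each boundary.
assert not is_valid_credit_limit(9_999)   # lower boundary minus one
assert is_valid_credit_limit(10_000)      # on the lower boundary
assert is_valid_credit_limit(10_001)      # lower boundary plus one
assert is_valid_credit_limit(14_999)      # upper boundary minus one
assert is_valid_credit_limit(15_000)      # on the upper boundary
assert not is_valid_credit_limit(15_001)  # upper boundary plus one
```

Off-by-one mistakes (`<` vs. `<=`) are exactly what these six cases catch.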
  • 30. Error Guessing
    • Based on the theory that test cases can be developed from the experience of the test engineer
    • For example, where one of the inputs is a date, a test engineer might try February 29, 2000 or 9/9/99
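A sketch of the slide's date example, assuming a hypothetical `is_valid_date` helper built on Python's `datetime` module:

```python
from datetime import date

# Hypothetical date check; error guessing targets historically
# troublesome inputs such as leap days.
def is_valid_date(year, month, day):
    """Return True when (year, month, day) is a real calendar date."""
    try:
        date(year, month, day)
        return True
    except ValueError:
        return False

# Experience-based guesses in the spirit of the slide:
assert is_valid_date(2000, 2, 29)      # 2000 was a leap year
assert not is_valid_date(1900, 2, 29)  # 1900 was not (century rule)
assert is_valid_date(1999, 9, 9)       # 9/9/99, a once-common sentinel value
```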
  • 31. Incremental Testing
    • A disciplined method of testing the interfaces between unit-tested programs as well as between system components
    • Incremental testing types:
      • Top-down
      • Bottom-up
  • 32. Top-Down
    • Begins testing from the top of the module hierarchy and works down to the bottom using interim stubs to simulate lower interfacing modules or programs
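A minimal sketch of top-down integration with an interim stub; all names here are hypothetical.

```python
# Top-down integration: the top-level module is real, while the
# lower-level module it calls is replaced by a stub.

def tax_service_stub(amount):
    """Stub standing in for the not-yet-integrated tax module."""
    return amount / 10  # canned 10% response

def compute_invoice_total(amount, tax_fn):
    """Top-level module under test; depends on a lower-level tax module."""
    return amount + tax_fn(amount)

# The top module is exercised before the real tax module exists.
assert compute_invoice_total(100.0, tax_service_stub) == 110.0
```

When the real tax module is integrated, it replaces `tax_service_stub` and the same test is rerun.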
  • 33. Bottom-Up
    • Begins testing from the bottom of the hierarchy and works up to the top
    • Bottom-up testing requires the development of driver modules, which provide the test input, call the module or program being tested, and display the test output
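The driver idea can be sketched as follows; the names are hypothetical.

```python
# Bottom-up integration: the low-level module is real, and a driver
# supplies test input, calls it, and reports the results.

def compute_tax(amount):
    """Low-level module under test (10% tax)."""
    return amount / 10

def driver():
    """Driver module: feeds inputs, calls the unit, checks the output."""
    cases = [(0.0, 0.0), (100.0, 10.0), (250.0, 25.0)]
    for amount, expected in cases:
        actual = compute_tax(amount)
        assert actual == expected, f"compute_tax({amount}) -> {actual}"
    return "all cases passed"

print(driver())
```

Once the higher-level modules exist, the driver is discarded and the real callers take its place.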
  • 34. Thread Testing
    • A technique, often used during early integration testing
    • Demonstrates key functional capabilities by testing a string of units that accomplish a specific function in the application
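A thread test can be sketched as a string of hypothetical units exercised end to end for one function (here, "place an order"); none of these names come from the slides.

```python
# Hypothetical units that together accomplish one function.
def validate_item(item):
    """Unit 1: check the item exists in the catalogue."""
    return item in {"book", "pen"}

def price_of(item):
    """Unit 2: look up the unit price."""
    return {"book": 12, "pen": 2}[item]

def place_order(item, qty):
    """Unit 3: the thread's entry point, stringing the units together."""
    if not validate_item(item):
        raise ValueError("unknown item")
    return price_of(item) * qty

# The thread test exercises the whole string of units for one function.
assert place_order("book", 3) == 36
```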
  • 35. Illustrates various techniques used throughout the test stages