MCA –Software Engineering
Kantipur City College
Topics include
Validation Planning
Testing Fundamentals
Test plan creation
Test-case generation
 Black-box Testing
 White Box Testing
Unit Testing
Integration Testing
System testing
Object-oriented Testing
Verification Vs.
Validation
Two questions:
 Are we building the right product? => Validation
 Are we building the product right? => Verification
Resources: people, money, machines, materials
Efficiency - making the best use of resources in achieving goals (building the product right)
Effectiveness - choosing effective goals and achieving them (building the right product)
Verification &
Validation
 Software V & V is a disciplined approach to assessing software
products throughout the SDLC.
 V & V strives to ensure that quality is built into the software
and that the software satisfies business functional
requirements.
 V & V ensures that software conforms to its specification
and meets the needs of the customers.
 V & V employs review, analysis, and testing techniques to
determine whether a software product and its intermediate
deliverables comply with requirements. These requirements
include both business functional capabilities and quality
attributes.
 V & V provides management with insights into the state of the
project and the software products, allowing for timely change
in the products or in the SDLC approach.
 V & V is typically applied in parallel with software development
and support activities.
 Verification involves checking that
The software conforms to its specification.
System meets its specified functional and non-functional
requirements.
“Are we building the product right ?”
 Validation, a more general process, ensures that the
software meets the expectations of the customer.
“Are we building the right product ?”
You can't test in quality. If it's not there before you begin
testing, it won’t be there when you’re finished testing.
Techniques of system
checking & Analysis
 Software inspections
 Concerned with analysis of the static system
representation to discover problems (static verification)
such as
• Requirements document
• Design diagrams and
• Program source code
 They do not require the system to be executed.
 These techniques include program inspections, automated
source code analysis and formal verification.
 They cannot check the non-functional characteristics of the
software, such as its performance and reliability.
 Software testing
 It involves executing an implementation of the software
with test data and examining the outputs of the software
and its operational behavior to check that it is performing
as required.
 It is a dynamic technique of verification and validation.
 The system is executed with test data and its operational
behaviour is observed.
 Two distinct types of testing
 Defect testing : to find inconsistencies between a program
and its specification.
 Statistical testing : to test program’s performance and
reliability and to check how it works under operational
conditions
Static and Dynamic V &
V
[Figure: static verification applies to the requirements specification, formal specification, high-level design, detailed design and program; dynamic validation applies to the prototype and the program.]
Software Testing
fundamentals
 Testing is a set of activities that can be planned in
advance and conducted systematically.
 Testing is the process of executing a program with the
intent of finding errors.
 A good test case is one with a high probability of finding
an as-yet undiscovered error.
 A successful test is one that discovers an as-yet-
undiscovered error.
Software testing
principles
 All tests should be traceable to customer
requirements.
 Tests should be planned long before testing
begins.
 The Pareto principle (80% of all errors will likely
be found in 20% of the code) applies to software
testing.
 Testing should begin in the small and progress to
the large.
 Exhaustive testing is not possible.
 To be most effective, testing should be conducted
by an independent third party.
Software Testability
Checklist
 Operability - the better it works, the more efficiently it can
be tested
 Observability - what you see is what you test
 Controllability - the better the software can be controlled, the
more testing can be automated and optimized
 Decomposability - by controlling the scope of testing, problems
can be isolated and retested more quickly and
intelligently
 Simplicity - the less there is to test, the more quickly we
can test
 Stability - the fewer the changes, the fewer the
disruptions to testing
 Understandability - the more information known, the
smarter the testing
V&V Vs. Debugging
 Verification and validation
 A process that establishes the existence of defects in a
software system.
 The ultimate goal of the V&V process is to establish
confidence that the software system is “fit for purpose”.
 Debugging
 A process that locates and corrects these defects
[Figure: the debugging process - test results, the specification and the test cases feed into locating the error, designing the error repair, repairing the error and re-testing the program.]
The defect testing
process
[Figure: design test cases -> prepare test data -> run program with test data -> compare results to test cases; the stages produce test cases, test data, test results and test reports.]
Test data
Inputs which have been devised to test the system
Test cases
Inputs to test the system and the predicted outputs from
these inputs if the system operates according to its
specification
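The distinction can be made concrete in code: test data is just the inputs, while a test case pairs each input with its predicted output from the specification. A minimal sketch in Python (the `is_leap_year` function and its cases are illustrative assumptions, not from the slides):

```python
def is_leap_year(year):
    """System under test (hypothetical example)."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Each test case = (input, output predicted by the specification).
test_cases = [
    (2000, True),   # divisible by 400
    (1900, False),  # divisible by 100 but not 400
    (2024, True),   # divisible by 4
    (2023, False),  # not divisible by 4
]

# Test data alone would just be the inputs: [2000, 1900, 2024, 2023].
for given_input, predicted in test_cases:
    actual = is_leap_year(given_input)
    assert actual == predicted, f"defect found for input {given_input}"
print("all test cases passed")
```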
Project Planning
Quality Plan: Describes the quality procedures and standards that will be used in a project.
Validation Plan: Describes the approach, resources and schedule used for system validation.
Configuration Management Plan: Describes the configuration management procedures and structures to be used.
Maintenance Plan: Predicts the maintenance requirements of the system, maintenance costs and effort required.
Staff Development Plan: Describes how the skills and experience of the project team members will be developed.
Verification and
Validation Plan
[Figure: V-model of development and testing - the requirements specification, system specification, system design and detailed design feed the acceptance test plan, system integration test plan and sub-system integration test plan; module and unit code and tests are followed by sub-system integration testing, system integration testing, acceptance testing and service.]
Test Plan as a link between development and testing
Testing Process
[Figure: testing phases - unit testing -> module testing -> sub-system testing -> system testing -> acceptance testing; unit and module testing form component testing, sub-system and system testing form integration testing, and acceptance testing is user testing.]
Testing Process
 Unit testing - Individual components are tested
independently, without other system components
 Module testing - Related collections of dependent
components (classes, ADTs, procedures & functions) are tested,
without other system modules.
 Sub-system testing - Modules are integrated into sub-systems
and tested. The focus here should be on interface testing to
detect module interface errors or mismatches.
 System testing - Testing of the system as a whole. Validating
functional and non-functional requirements & Testing of
emergent system properties.
 Acceptance testing - Testing with customer data to check that
the system is acceptable. Also called alpha testing.
 Component testing
Testing of individual program components
Usually the responsibility of the component developer
(except sometimes for critical systems)
Tests are derived from the developer’s experience
 Integration testing
Testing of groups of components integrated to create
a system or sub-system
The responsibility of an independent testing team
Tests are based on a system specification
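A component test can be sketched with Python's built-in unittest framework; the `Stack` class here is a hypothetical component, not from the lecture:

```python
import unittest

class Stack:
    """Hypothetical component under test."""
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()
    def is_empty(self):
        return not self._items

class StackUnitTest(unittest.TestCase):
    """Component (unit) test: the Stack is exercised in isolation,
    without any other system components."""
    def test_push_then_pop_returns_last_item(self):
        s = Stack()
        s.push(1)
        s.push(2)
        self.assertEqual(s.pop(), 2)

    def test_new_stack_is_empty(self):
        self.assertTrue(Stack().is_empty())

# Run the unit tests programmatically (no process exit).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(StackUnitTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```

An integration test, by contrast, would exercise the Stack together with the components that use it, with the cases derived from the system specification rather than the developer's experience.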
Acceptance Testing
Making sure the software works correctly for the
intended user in his or her normal work
environment.
Alpha test - a version of the complete software is
tested by the customer under the supervision of the
developer at the developer’s site.
Beta test - a version of the complete software is
tested by the customer at his or her own site without
the developer being present.
Black-box testing
 Also known as behavioral or functional testing.
 The system is a “Blackbox” whose behavior can be
determined by studying its inputs and related outputs.
 Knowing the specified function a product is to perform
and demonstrating correct operation based solely on its
specification without regard for its internal logic.
 Focus on the functional requirements of the software i.e.,
information domain not the implementation part of the
software and disregards control structure.
 The program test cases are based on the system
specification
 It is performed during the later stages of testing, such as
acceptance testing or beta testing.
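As a sketch of the idea: suppose a specification says `grade()` maps a score of 0-100 to pass (40 or more) or fail (a hypothetical specification, for illustration only). The black-box test cases come from that specification alone, never from the function's internal logic:

```python
def grade(score):
    # The implementation is a "black box": its internals are
    # irrelevant to the tester.
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "pass" if score >= 40 else "fail"

# Test cases derived purely from the specification: (input, expected output).
spec_cases = [(0, "fail"), (39, "fail"), (40, "pass"), (100, "pass")]
for score, expected in spec_cases:
    assert grade(score) == expected, f"score {score} violates the spec"
```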
[Figure: black-box testing - input test data Ie is applied to the system; inputs causing anomalous behaviour produce output test results Oe that reveal the presence of defects.]
Black-box testing
Tests are designed to answer the following questions:
 How is functional validity tested?
 How is system behavior and performance tested?
 What classes of input will make good test cases?
 Is the system particularly sensitive to certain input
values?
 How are the boundaries of data classes isolated?
 What data rates and data volume can the system
tolerate?
 What effect will specific combinations of data have on
system operations?
Advantages of Black box
testing
 Validates whether or not a given system conforms to
its software specification.
 Introduces a series of inputs to a system and compares
the outputs to a pre-defined test specification.
 Tests integration between individual system
components.
 Tests are architecture independent: they do not
concern themselves with how a given output is
produced, only with whether that output is the
desired and expected output.
 Requires no knowledge of the underlying system; one
need not be a software engineer to design black-box
tests.
Disadvantages of Black
box testing
 Offers no guarantee that every line of code has been
tested.
 Being architecture independent, it cannot determine
the efficiency of the code.
 Will not find errors, such as memory leaks, that
are not explicitly and immediately exposed by the
application.
Black-box testing
techniques
 Graph-based testing methods
 Equivalence partitioning
 Boundary value analysis (BVA)
 Comparison testing
 Orthogonal array testing
Equivalence
Partitioning
 Black-box technique that divides the input domain into
classes of data from which test cases can be derived.
 An ideal test case uncovers a class of errors (e.g. incorrect
processing of all incorrect data) that might otherwise require many
arbitrary test cases to be executed before the general error is
observed.
 Equivalence class guidelines:
If an input condition specifies a range, one valid and two invalid
equivalence classes are defined
If an input condition requires a specific value, one valid and
two invalid equivalence classes are defined
If an input condition specifies a member of a set, one valid
and one invalid equivalence class is defined
If an input condition is Boolean, one valid and one invalid
equivalence class is defined
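Applying the range guideline: for an input condition that accepts values from 10000 to 99999, one valid and two invalid equivalence classes give three representative test values. A sketch, where `accept()` stands in for a hypothetical system entry point:

```python
def accept(value):
    """Hypothetical system: accepts five-digit values per its specification."""
    return 10000 <= value <= 99999

# Range guideline: one valid and two invalid equivalence classes,
# each represented by a single test value.
partitions = {
    "valid (10000..99999)": 50000,
    "invalid (< 10000)":    9999,
    "invalid (> 99999)":    100000,
}
for name, representative in partitions.items():
    result = accept(representative)
    expected = name.startswith("valid")
    assert result == expected, f"partition {name} misbehaved"
```

One representative per class suffices because, by assumption, every member of an equivalence class is processed the same way.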
[Figure: equivalence partitions - the system's input domain divides into valid and invalid input partitions. Example: for a five-digit integer input, the partitions are "less than 10000", "between 10000 and 99999" and "more than 99999", tested with the values 9999, 10000, 50000, 99999 and 100000. For a number of input values between 4 and 10, the partitions are "fewer than 4", "between 4 and 10" and "more than 10", tested with 3, 4, 7, 10 and 11.]
Boundary Value Analysis (BVA)
 Black-box technique that focuses on the boundaries of the
input domain rather than its center
 BVA guidelines:
 If an input condition specifies a range bounded by values a and
b, test cases should include a and b, and the values just above
and just below a and b
 If an input condition specifies a number of values, test
cases should exercise the minimum and maximum
numbers, as well as values just above and just below the
minimum and maximum values
 Apply guidelines 1 and 2 to output conditions; test cases
should be designed to produce the minimum and maximum
output reports
 If internal program data structures have boundaries (e.g. size
limitations), be certain to test those boundaries
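For the same 10000-99999 range, the first guideline yields six test values: a, b, and the values just below and just above each boundary. A sketch (the `accept()` function is a hypothetical example):

```python
def accept(value):
    """Hypothetical system: accepts values in the range [10000, 99999]."""
    return 10000 <= value <= 99999

a, b = 10000, 99999
# Guideline 1: test a and b plus the values just below and just above each,
# since defects cluster at the boundaries rather than the center.
bva_cases = {
    a - 1: False,  # just below a
    a:     True,   # boundary a
    a + 1: True,   # just above a
    b - 1: True,   # just below b
    b:     True,   # boundary b
    b + 1: False,  # just above b
}
for value, expected in bva_cases.items():
    assert accept(value) == expected, f"boundary defect at {value}"
```

Note how this strengthens equivalence partitioning: an off-by-one error such as `value > 10000` would pass the partition representatives but fail the `a` boundary case here.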
Comparison Testing
 Also called back-to-back testing.
 Black-box testing for safety-critical systems (such
as aircraft avionics or automobile braking systems) in
which independently developed implementations of
redundant systems are tested for conformance to the
specification
 Often equivalence class partitioning is used to
develop a common set of test cases for each
implementation.
Orthogonal Array
Testing
 Black-box technique that enables the design of a
reasonably small set of test cases that provide
maximum test coverage
 Focus is on categories of faulty logic likely to be
present in the software component (without
examining the code)
 Priorities for assessing tests using an orthogonal
array
Detect and isolate all single-mode faults
Detect all double-mode faults
Detect multimode faults
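As an illustration of why the test set stays small: for three two-level factors, the standard L4 orthogonal array covers every pair of factor levels in only four runs, versus eight for exhaustive testing. A sketch verifying the pairwise property:

```python
from itertools import combinations, product

# L4(2^3) orthogonal array: 4 runs over 3 two-level factors.
L4 = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

# Pairwise coverage: every pair of columns contains all 4 level
# combinations, so single-mode and double-mode faults are all
# exercised in 4 runs instead of 2^3 = 8.
for c1, c2 in combinations(range(3), 2):
    pairs = {(run[c1], run[c2]) for run in L4}
    assert pairs == set(product((0, 1), repeat=2))
print("pairwise coverage confirmed with 4 of 8 runs")
```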
White-box or Glass Box
testing
Knowing the internal workings of a product,
tests are performed to check the workings of all
independent logic paths.
It derives test cases that:
Guarantee that all independent paths within a module
have been exercised at least once.
Exercise all logical decisions on their true and false
sides.
Execute all loops at their boundaries and within their
operational bounds, and
Exercise internal data structures to ensure their
validity.
Techniques used: basis path testing and control
structure testing.
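The first two criteria can be seen on a small function: a test set in which each decision takes both its true and false outcomes also exercises every independent path through the module. A sketch (the `max_of_three` function is illustrative):

```python
def max_of_three(a, b, c):
    # Two decisions -> up to four independent paths through the module.
    m = a
    if b > m:      # decision 1: both outcomes must be exercised
        m = b
    if c > m:      # decision 2: both outcomes must be exercised
        m = c
    return m

# White-box test set chosen so every decision takes both outcomes
# and every independent path is executed at least once.
path_cases = [
    ((3, 1, 2), 3),  # decision 1 false, decision 2 false
    ((1, 3, 2), 3),  # decision 1 true,  decision 2 false
    ((1, 2, 3), 3),  # decision 1 true,  decision 2 true
    ((2, 1, 3), 3),  # decision 1 false, decision 2 true
]
for args, expected in path_cases:
    assert max_of_three(*args) == expected
```

Unlike the black-box cases earlier, these four inputs were chosen by reading the code's control structure, not its specification.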
[Figure: white-box testing - test data is derived from the component code; running the tests against the code produces the test outputs.]
Integration Testing
Tests complete systems or subsystems
composed of integrated components.
Integration testing should be black-box testing,
with tests derived from the specification.
The main difficulty is localising errors;
incremental integration testing reduces this
problem.
Incremental integration strategies include:
Top-down integration
Bottom-up integration
Regression testing
Smoke testing
Approaches to
integration testing
Top-down testing
Start with the high-level system and integrate from the
top down, replacing individual components by stubs
where appropriate.
Bottom-up testing
Integrate individual components in levels until the
complete system is created.
In practice, most integration involves a combination
of these strategies.
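In top-down testing, a stub returns canned answers in place of a lower-level component that has not yet been integrated. A minimal sketch (the pricing and tax names are hypothetical examples):

```python
def tax_rate_stub(region):
    """Stub: replaces the not-yet-integrated lower-level tax-rate
    lookup with a canned answer (hypothetical example)."""
    return 0.25

def total_price(net, region, tax_rate=tax_rate_stub):
    """High-level component under test; its dependency is passed in
    so the stub can stand in during top-down integration."""
    return net * (1 + tax_rate(region))

# Top-down integration test: exercise the high-level logic against the stub.
assert total_price(100.0, "anywhere") == 125.0
# Later, the real tax-rate component replaces the stub without changing
# the high-level code. Bottom-up testing inverts this: the low-level
# component is real and a test driver plays the role of its caller.
```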
[Figure: top-down testing - the testing sequence starts at level 1 with level 2 stubs, then integrates the level 2 components with level 3 stubs, and so on down the hierarchy.]
[Figure: bottom-up testing - the testing sequence starts with the level N components exercised by test drivers, then integrates the level N-1 components with their own test drivers, and so on up to the complete system.]
System Testing
 Recovery testing
 Checks the system’s ability to recover from failures.
 Security testing
 Verifies that system protection mechanisms prevent improper
penetration or data alteration
 Stress testing
 Program is checked to see how well it deals with abnormal
resource demands – quantity, frequency, or volume.
 Performance testing
 Designed to test the run-time performance of software,
especially real-time software.
Object-oriented Testing
The components to be tested are object classes
that are instantiated as objects
Objects are larger grain than individual functions, so
approaches to white-box testing have to be
extended
No obvious ‘top’ to the system for top-down
integration and testing
Acceptance Test
Format
 Test Item List
 Identification of Test-item
 Testing Detail
 Detailed testing procedure
 Testing Result
 Summary of testing-item
Test-item List
Item No.  Test Item     Sub-item No.  Test Sub-item            Level
SR-02     Staff Review  SR-02-01      Program Officer Review   A
                        SR-02-02      Early Decline Report     A
Test Levels:
A - Basic function, compulsory
B - Enhanced function, compulsory
C - Enhanced function, optional
Testing Details
Item No: SR-02-01            Test Date:
Item: SR-02 Staff Review     Sub-item: PO Review
                             Report: Early Decline
Precondition:
Test Procedure:
Test Standard:
Test Description:
Test Result and Conclusion:   Passed /  Failed
Sign of the Tester:          Sign of the Manager:
References
 Software Engineering: A Practitioner's
Approach by Roger S. Pressman
– Chapter 17: Software testing techniques
• Software testing fundamentals
• Test case design
• White-box testing: basis path and control structure testing
• Black-box testing
– Chapter 18: Software testing strategies
• A strategic approach to software testing
• Unit, integration, validation, system testing
 Software Engineering by Ian Sommerville
– Part 5: Verification and Validation
• Chapter 19: Verification and validation
• Chapter 20: Software testing
