Object-Oriented Software Engineering
Chapter 6
Object Oriented Testing
Object-Oriented Systems Development Bahrami © Irwin/ McGraw-Hill
An Overview of Testing
• Testing is the process of finding differences between the expected
behavior specified by system models and the observed behavior of
the system
• The goal of testing is to design tests that detect defects in the
system and reveal problems.
• To test a system effectively,
– a tester must have a detailed understanding of the whole system,
ranging from the requirements to system design decisions and
implementation issues.
• A tester must also be knowledgeable of testing techniques and
apply these techniques effectively and efficiently to meet time,
budget, and quality constraints.
An Overview of Testing
• Testing is
– not a process of demonstrating that "errors are not present"
– a systematic attempt to find errors in a planned way
• Software Testing answers the question
“Does the s/w behave as
specified…?”
Testing is a process used to identify the correctness,
completeness and quality of developed computer
software.
Testing………….?
• One of the practical methods commonly used to detect the presence
of errors (failures) in a computer program is to test it for a set of
inputs.
[Figure: inputs I1, I2, I3, …, In, … are fed to our program; the obtained
results are compared with the expected results to decide whether the
output is correct.]
Who Tests the Software….?
Developer:
 Understands the system
 Tests "gently"
 Test is driven by "delivery"
Independent Tester:
 Learns about the system
 Attempts to break the system
 Test is driven by "quality"
What Testing Shows…?
• Errors
• Requirements conformance
• Performance
• An indication of quality
When to start Testing …?
• An early start to testing reduces the cost and time of rework and
helps ensure that error-free software is delivered to the client.
• In the Software Development Life Cycle (SDLC), testing can start as
early as the Requirements Gathering phase and can last until the
deployment of the software.
• However, this also depends on the development model that is being
used.
• For example, in the Waterfall model formal testing is conducted in the
Testing phase, but in the incremental model testing is performed at the
end of every increment/iteration and, at the end, the whole application
is tested.
When to stop Testing …?
• Unlike when to start testing, it is difficult to determine when to stop,
as testing is a never-ending process and no one can say that any
software is 100% tested.
• The following aspects should be considered when deciding to stop
testing:
• Testing deadlines
• Management decision
• Completion of test case execution
• Completion of functional and code coverage to a certain point
• Bug rate falls below a certain level
An Overview of Testing
 Unit testing finds differences between the object design model and
its corresponding component.
 Structural testing finds differences between the system design
model and a subset of integrated subsystems.
 Functional testing finds differences between the use case model
and the system.
 Finally, performance testing finds differences between
nonfunctional requirements and actual system performance.
Software Reliability
• Software reliability is the probability that a software system will
not cause the failure of the system for a specified time under
specified conditions [IEEE Std. 982-1989].
• The goal of testing is to maximize the number of discovered faults,
and increase the reliability of the system.
• The three classes of techniques for avoiding faults, detecting faults,
and tolerating faults are
1. Fault Avoidance Techniques
2. Fault Detection Techniques
3. Fault Tolerance Techniques
Quality Control Techniques
1. Fault Avoidance Techniques
Fault avoidance tries to prevent the occurrence of errors and failures by
finding faults in the system before it is released.
• Fault avoidance techniques include:
– development methodologies
– configuration management
– verification techniques
– reviews of the system models, in particular the code model.
Fault Avoidance Techniques
Development methodologies avoid faults by providing techniques
that minimize fault introduction in the system models and code.
Such techniques include:
 the unambiguous representation of requirements,
 the use of data abstraction and data encapsulation,
 minimizing the coupling between subsystems,
 maximizing subsystem coherence,
 the early definition of subsystem interfaces, and
 the capture of rationale information for maintenance activities.
Fault Avoidance Techniques
Configuration management avoids faults caused by undisciplined
change in the system models.
• It is a common mistake to change a subsystem interface without
notifying all developers of calling components.
• Configuration management can make sure that, if analysis models
and code become inconsistent with one another, analysts and
implementers are notified.
Verification attempts to find faults before any execution of the system.
• It is possible for a small operating system kernel.
• It has limits.
• It is difficult to verify the quality of large, complex systems.
Fault Avoidance Techniques
• A review is the manual inspection of parts or all aspects of the
system without actually executing the system.
• There are two types of reviews:
– walkthrough
– inspection.
• In a walkthrough, the developer informally presents the API, the code,
and the associated documentation of the component to the review
team.
• The review team makes comments on the mapping of the analysis
and object design to the code using use cases and scenarios from the
analysis phase.
Fault Avoidance Techniques
• An inspection is similar to a walkthrough, but the presentation of the
unit is formal.
• In fact, in a code inspection, the developer is not allowed to present
the artifacts (models, code, and documentation).
• This is done by the review team, which is responsible for checking
the interface and code of the component against the requirements.
• It also checks the algorithms for efficiency with respect to the
nonfunctional requirements.
• Finally, it checks comments about the code and compares them with
the code itself to find inaccurate and incomplete comments.
Fault detection techniques
• Fault detection techniques, such as debugging and testing, are used
during the development process to identify errors and find the
underlying faults before releasing the system.
• Debugging assumes that
– faults can be found by starting from an unplanned failure.
– developer moves the system through a succession of states,
ultimately arriving at and identifying the erroneous state.
There are two types of debugging:
• The goal of correctness debugging is to find any deviation between
the observed and specified functional requirements.
• Performance debugging addresses the deviation between observed
and specified nonfunctional requirements, such as response time.
Fault detection techniques
• Testing
– fault detection technique that tries to create failures or errors in
a planned way.
– allows the developer to detect failures in the system before it is
released to the customer.
• Unit testing tries to find faults in participating objects and/or
subsystems with respect to the use cases from the use case model
• Integration testing is the activity of finding faults when testing the
individually tested components together, for example, subsystems
described in the subsystem decomposition, while executing the use
cases and scenarios from the RAD.
Fault detection techniques
• System testing tests all the components together, seen as a single
system, to identify errors with respect to the scenarios from the
problem statement and the requirements and design goals identified
in the analysis and system design, respectively:
• Functional testing tests the requirements from the RAD and, if
available, from the user manual.
• Performance testing checks the nonfunctional requirements and
additional design goals from the SDD. Note that functional and
performance testing are both done by developers.
• Acceptance testing and installation testing check the requirements
against the project agreement and should be done by the client, if
necessary with support by the developers.
Levels of Testing
Information needed at different Levels of Testing
Logical View
Unit Testing
• A unit is the smallest testable part of software.
• In procedural programming a unit may be an individual program,
function, procedure, etc. In OOP, the smallest unit is a method.
– Unit testing is often neglected but it is, in fact, the most
important level of testing.
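To make this concrete, here is a minimal unit-test sketch for a single method, assuming JUnit 5 as the test framework; the Doubler class and its doubleOf method are hypothetical names invented for illustration (they echo the doubling example used later in this chapter):

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class Doubler {
    // The unit under test: in OOP, the smallest testable unit is a method.
    static int doubleOf(int x) {
        return x + x;
    }
}

class DoublerTest {
    @Test
    void doublesAPositiveNumber() {
        // Input 3 with expected result 6: an (input, expected result) pair.
        assertEquals(6, Doubler.doubleOf(3));
    }

    @Test
    void doublesZero() {
        assertEquals(0, Doubler.doubleOf(0));
    }
}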
Unit Testing
Method:
• Unit Testing is performed using the White Box Testing method.
When is it performed?
• It is the first level of testing, performed prior to Integration Testing.
Who performs it?
• It is performed by software developers themselves or their peers.
Integration Testing
 Integration Testing is a level of the software testing process
where individual units are combined and tested as a group.
 Integration testing tests the interfaces between components and
their interaction with different parts of the system.
Integration Testing Approaches
Big Bang is an approach to Integration Testing where all or most of
the units are combined and tested in one go. This approach is taken
when the testing team receives the entire software in a bundle.
Top Down is an approach to Integration Testing where top-level units
are tested first and lower-level units are tested step by step after
that. This approach is taken when a top-down development approach
is followed.
Integration Testing Approaches
Bottom Up is an approach to Integration Testing where bottom-level
units are tested first and upper-level units step by step after that.
This approach is taken when a bottom-up development approach is
followed.
Sandwich/Hybrid is an approach to Integration Testing which is a
combination of the Top Down and Bottom Up approaches.
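The role of stubs in top-down integration can be sketched as follows; this is only an illustration under assumed names (OrderService, PaymentGateway, PaymentGatewayStub) and an assumed JUnit 5 setup, not an approach prescribed by the slides:

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Lower-level unit that the top-level unit depends on.
interface PaymentGateway {
    boolean charge(int cents);
}

// Top-level unit tested first in a top-down approach.
class OrderService {
    private final PaymentGateway gateway;
    OrderService(PaymentGateway gateway) { this.gateway = gateway; }
    boolean placeOrder(int cents) { return gateway.charge(cents); }
}

// Stub standing in for the not-yet-integrated lower-level unit.
class PaymentGatewayStub implements PaymentGateway {
    public boolean charge(int cents) { return true; } // canned answer, no real payment
}

class OrderServiceTopDownTest {
    @Test
    void placesAnOrderAgainstTheStubbedGateway() {
        OrderService service = new OrderService(new PaymentGatewayStub());
        assertTrue(service.placeOrder(500));
    }
}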
System Testing
Functional Testing:
 Goal: test the functionality of the system.
 Test cases are designed from the requirements analysis
document (better: the user manual) and centered around
requirements and key functions (use cases). The system is
treated as a black box.
 Unit test cases can be reused, but new test cases have to be
developed as well.
Performance Testing:
 Goal: try to violate non-functional requirements.
 Test how the system behaves when overloaded.
 Try unusual orders of execution.
 Check the system's response to large volumes of data.
 What is the amount of time spent in different use cases?
Types of Performance Testing
• Stress testing – stress the limits of the system
• Volume testing – test what happens if large amounts of data are handled
• Configuration testing – test the various software and hardware configurations
• Compatibility testing – test backward compatibility with existing systems
• Timing testing – evaluate response times and the time to perform a function
• Security testing – try to violate security requirements
• Environmental testing – test tolerances for heat, humidity, motion
• Quality testing – test reliability, maintainability & availability
• Recovery testing – test the system's response to the presence of errors or loss of data
• Human factors testing – test with end users
Acceptance Testing
 Goal: demonstrate that the system is ready for operational use.
 The choice of tests is made by the client.
 Many tests can be taken from integration testing.
 The acceptance test is performed by the client, not by the developer.
 Alpha test: the client uses the software in the developer's
environment. The software is used in a controlled setting, with the
developer always ready to fix bugs.
 Beta test: conducted in the client's environment (the developer is
not present). The software gets a realistic workout in the target
environment.
Fault tolerance techniques
Fault tolerance techniques assume that a system can be released
with errors and that system failures can be dealt with by
recovering from them at run time.
• Fault tolerance is the recovery from failure while the system is
executing.
• A component is a part of the system that can be isolated for
testing. A component can be an object, a group of objects, or
one or more subsystems.
• A fault, also called bug or defect, is a design or coding mistake
that may cause abnormal component behavior.
Testing Concepts
• An error is a manifestation of a fault during the execution of the
system.
• A failure is a deviation between the specification of a component and
its behavior. A failure is triggered by one or more errors.
• A test case is a set of inputs and expected results that exercises a
component with the purpose of causing failures and detecting faults.
• A test stub is a partial implementation of components on which the
tested component depends.
• A test driver is a partial implementation of a component that
depends on the tested component. Test stubs and drivers enable
components to be isolated from the rest of the system for testing.
• A correction is a change to a component. The purpose of a
correction is to repair a fault.
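The stub and driver definitions above can be pictured with a short, framework-free sketch; all names here (Storage, StorageStub, Logger, LoggerTestDriver) are hypothetical, chosen only to illustrate the roles:

// Dependency of the tested component.
interface Storage {
    void save(String record);
}

// Test stub: a partial implementation of a component the tested component depends on.
class StorageStub implements Storage {
    int saved = 0;
    public void save(String record) { saved++; } // records the call, stores nothing
}

// The tested component.
class Logger {
    private final Storage storage;
    Logger(Storage storage) { this.storage = storage; }
    void log(String message) { storage.save("LOG: " + message); }
}

// Test driver: a partial implementation that depends on (and exercises) the tested component.
public class LoggerTestDriver {
    public static void main(String[] args) {
        StorageStub stub = new StorageStub();
        Logger logger = new Logger(stub);
        logger.log("hello");
        // Oracle: exactly one record should have been saved.
        System.out.println(stub.saved == 1 ? "PASS" : "FAIL");
    }
}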
Model elements used during Testing
Faults, Failures & Errors
Errors
An error is a mistake, misconception, or misunderstanding on the part
of a software developer.
Faults / Defects / Bugs
• A fault (defect) is introduced into the software as the result of an
error.
• It is an anomaly in the software that may cause it to behave
incorrectly, and not according to its specification.
Failures
• A failure is the inability of a software system or component to
perform its required functions within specified performance
requirements.
A fault in the code does not always produce a failure.
Fault - Failure - Error - illustrations
LOC  Code
1    program double ();
2    var x, y: integer;
3    begin
4      read(x);
5      y := x * x;
6      write(y)
7    end
Fault: the fault that causes the failure is in line 5: the * operator is
used instead of +.
Error: the error that led to this fault may be:
• a typing error (the developer has written * instead of +), or
• a conceptual error (e.g., the developer doesn't know how to double
a number).
Failure: for x = 3 the program outputs y = 9. This is a failure of the
system, since the correct output would be 6.
Fault – Error – Failure
[Figure: an example of a fault, e.g., an error due to a mechanical cause,
an earthquake, etc.]
Testing Concepts
Test cases
A test case is a set of input data and expected results that
exercises a component with the purpose of causing failures and
detecting faults.
A test case has five attributes:
1. name
2. location
3. input
4. oracle, and
5. log (Table 9-1).
The name of the test case allows the tester to distinguish between
different test cases.
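One way to picture the five attributes is as the fields of a simple record; the sketch below is illustrative only (the TestCase class and its field types are hypothetical, not an API from the text):

// A minimal sketch of the five test case attributes as a data structure.
public class TestCase {
    String name;     // lets the tester distinguish this test case from others
    String location; // where the test case resides, e.g., a path to its source
    String input;    // the input data fed to the component under test
    String oracle;   // the expected result to compare the actual output against
    String log;      // the recorded output and verdict of a test run

    TestCase(String name, String location, String input, String oracle) {
        this.name = name;
        this.location = location;
        this.input = input;
        this.oracle = oracle;
        this.log = ""; // filled in when the test is executed
    }
}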
Testing Concepts
Test Cases: a test model where TestA must precede TestB and TestC.
TestA consists of TestA1 and TestA2, meaning that once TestA1 and
TestA2 are tested, TestA is tested; there is no separate test for TestA.
A good test model has as few associations as possible, because tests
that are not associated with each other can be executed independently
of each other. This allows a tester to speed up testing.
Testing Concepts
Test Cases: depending on which aspect of the system model is
tested, test cases are classified into
 black box tests and
 white box tests.
Black box tests
 focus on the input/output behavior of the component.
 do not deal with the internal aspects of the component, nor
with the behavior or the structure of the component.
White box tests
 focus on the internal structure of the component.
 make sure that, independently of the particular input/output
behavior, every state in the dynamic model and every interaction
among the objects is tested.
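The contrast can be sketched with a small example (JUnit 5 assumed; Classifier and classify are hypothetical names): a black box test checks an input/output pair taken from the specification, while a white box test chooses inputs by looking at the code so that every branch is executed.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class Classifier {
    // Component under test: its internal structure has two branches.
    static String classify(int score) {
        if (score < 40) return "fail";
        return "pass";
    }
}

class ClassifierTests {
    @Test
    void blackBoxCheckOfASpecifiedPair() {
        // Black box: derived from the specification alone.
        assertEquals("pass", Classifier.classify(85));
    }

    @Test
    void whiteBoxCoverageOfBothBranches() {
        // White box: inputs chosen so that each branch of the code runs.
        assertEquals("fail", Classifier.classify(10)); // true branch
        assertEquals("pass", Classifier.classify(60)); // false branch
    }
}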
Testing Concepts
Correction
 is a change to a component whose purpose is to repair a fault.
 can range from a simple modification of a single component to a
complete redesign of a data structure or a subsystem.
A correction may itself introduce new faults. Several techniques can
be used to handle such faults:
a. Problem tracking includes the documentation of each failure,
error, and fault detected, its correction, and the revisions of
the components involved in the change.
b. Regression testing includes the re-execution of all prior tests after a
change. Regression testing is important in object-oriented methods,
but it is costly.
c. Rationale maintenance includes the documentation of the
rationale of the change and its relationship with the rationale of
the revised component.
Rationale maintenance enables developers to avoid introducing
new faults by inspecting the assumptions that were used to build
the component.
Documenting Testing
Testing activities are documented in four types of documents:
1. the Test Plan,
2. the Test Case Specifications,
3. the Test Incident Reports, and
4. the Test Summary Report.
Test Plan. The Test Plan focuses on the managerial aspects of testing. It
documents the scope, approach, resources, and schedule of testing
activities. The requirements and the components to be tested are
identified in this document.
The Test Plan (TP) and the Test Case Specifications (TCS) are written
early in the process, as soon as the test planning and each test case are
completed. These documents are under configuration management and
updated as the system models change.
Student Reading Assignment & Reference
Equivalence Partitioning
1. ‘‘If an input condition for the software-under-test is specified as a
range of values, select one valid equivalence class that covers the
allowed range and two invalid equivalence classes, one outside each
end of the range.’’
 For example, suppose the specification for a module says that an
input, the length of a wire in millimetres, lies in the range 1–499;
then select
 One valid equivalence class that includes all values from 1 to
499.
 Two invalid classes:
 one that consists of all values less than 1, and
 one that consists of all values greater than 499.
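A sketch of how these three classes might each be exercised by one representative value (JUnit 5 assumed; WireValidator and isValidLength are hypothetical names):

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

class WireValidator {
    // Specification: the wire length in millimetres lies in the range 1..499.
    static boolean isValidLength(int mm) {
        return mm >= 1 && mm <= 499;
    }
}

class WireLengthEquivalenceTest {
    @Test
    void oneRepresentativeOfTheValidClass() {
        assertTrue(WireValidator.isValidLength(250)); // any value from 1..499
    }

    @Test
    void oneRepresentativeBelowTheRange() {
        assertFalse(WireValidator.isValidLength(0)); // invalid class: values < 1
    }

    @Test
    void oneRepresentativeAboveTheRange() {
        assertFalse(WireValidator.isValidLength(500)); // invalid class: values > 499
    }
}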
Equivalence Partitioning
2. ‘‘If an input condition for the software-under-test is specified as a
number of values, then select one valid equivalence class that includes
the allowed number of values and two invalid equivalence classes that
are outside each end of the allowed number.’’
For example, if the specification for a real estate-related module says that
a house can have one to four owners, then we select
– One valid equivalence class that includes all the valid
numbers of owners
– Two invalid equivalence classes
• Less than one owner and
• More than four owners.
Equivalence Partitioning
3. ‘‘If an input condition for the software-under-test is specified as a set
of valid input values, then select one valid equivalence class that
contains all the members of the set and one invalid equivalence class
for any value outside the set.’’
For example, if the specification for a paint module states that the
colours RED, BLUE, GREEN and YELLOW are allowed as inputs, then
select
– One valid equivalence class that includes the set RED,
BLUE, GREEN and YELLOW, and
– One invalid equivalence class for all other inputs.
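A sketch for the set-of-values rule (JUnit 5 and Java 9+ assumed; PaintModule and isAllowedColour are hypothetical names):

import java.util.Set;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

class PaintModule {
    // Specification: only RED, BLUE, GREEN and YELLOW are allowed as inputs.
    static final Set<String> ALLOWED = Set.of("RED", "BLUE", "GREEN", "YELLOW");

    static boolean isAllowedColour(String colour) {
        return ALLOWED.contains(colour);
    }
}

class PaintColourEquivalenceTest {
    @Test
    void oneMemberOfTheValidSet() {
        assertTrue(PaintModule.isAllowedColour("GREEN")); // valid class: the set itself
    }

    @Test
    void oneValueOutsideTheSet() {
        assertFalse(PaintModule.isAllowedColour("PURPLE")); // invalid class: everything else
    }
}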
Equivalence Partitioning
4. ‘‘If an input condition for the software-under-test is specified as a “must
be” condition, select one valid equivalence class to represent the “must
be” condition and one invalid class that does not include the “must be”
condition.’’
For example, if the specification for a module states that the first
character of a part identifier must be a letter, then select
• One valid equivalence class where the first character is a letter
and
• One invalid class where the first character is not a letter.
Equivalence Partitioning - An Illustration
Write Test Cases using Equivalence Partitioning for a requirement that is
stated as follows:
“In the examination grading system, if the student scores 0 to less than 40
then assign E Grade, if the student scores between 40 to 49 then assign
D Grade, if the student scores between 50 to 69 then assign C Grade, if
the student scores between 70 to 84 then assign B Grade, and if the
student scores 85 to 100 then assign A Grade.”
In the above problem definition, after analysis, we identify the set of
output values and the corresponding sets of input values that produce
the same output. This analysis results in:
Values from 0 to 39 produce E Grade
Values from 40 to 49 produce D Grade
Values from 50 to 69 produce C Grade
Values from 70 to 84 produce B Grade
Values from 85 to 100 produce A Grade
Equivalence Partitioning - An Illustration
Based on EP, we identify the following input values to test each partition.
For the EP in range 0 to 39 (producing E Grade), the input values are 0 to 39.
Here Minimum Value = 0, Maximum Value = 39, and the precision is 1.
Thus, the input values for testing this EP for Grade E are:
Minimum Value= 0
(Minimum Value+ precision)= 1
(Maximum Value- precision)= 38
Maximum Value= 39
Thus, input values for Boundary Values 0 and 39 are:
0, 1, 38, 39
Output Value is: Grade E
Equivalence Partitioning - An Illustration
Along similar lines of analysis, we arrive at the following input values for
the other EPs and the corresponding outputs:
For 40 to 49, we have 40, 41, 48, 49 and Output Value is “Grade D”
For 50 to 69, we have 50, 51, 68, 69 and Output Value is “Grade C”
For 70 to 84, we have 70, 71, 83, 84 and Output Value is “Grade B”
For 85 to 100, we have 85, 86, 99, 100 and Output Value is “Grade A”
In addition to these, we have invalid EPs with input values -1 and 101,
for which the corresponding output value is "Error".
Equivalence Partitioning - An Illustration
For these partitions, test cases based on the EP technique are documented
in a table (Test Case Design for the given example using the EP technique).
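The derived values can be turned directly into executable tests; the sketch below assumes JUnit 5, and GradingSystem.grade is a hypothetical implementation of the stated requirement, not code from the slides:

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class GradingSystem {
    // Hypothetical implementation of the grading requirement above.
    static String grade(int score) {
        if (score < 0 || score > 100) return "Error";
        if (score <= 39) return "E";
        if (score <= 49) return "D";
        if (score <= 69) return "C";
        if (score <= 84) return "B";
        return "A";
    }
}

class GradingEquivalenceTest {
    @Test
    void valuesDerivedForTheGradeEPartition() {
        for (int score : new int[] {0, 1, 38, 39}) {
            assertEquals("E", GradingSystem.grade(score));
        }
    }

    @Test
    void valuesOutsideEveryPartitionProduceAnError() {
        assertEquals("Error", GradingSystem.grade(-1));
        assertEquals("Error", GradingSystem.grade(101));
    }
}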
Boundary Value Analysis
Boundary Value Analysis (BVA) is a test case design technique that helps
identify an appropriate set of input values to supply when testing the
system.
 During software development, people often confuse the < and <=
(and > and >=) relational operators.
 As a result, a large number of errors tend to occur at the boundaries
of the input domain.
 BVA leads to the selection of test cases that exercise boundary
values.
Boundary Value Analysis
"Bugs lurk in corners and congregate at boundaries…"
---Boris Beizer
The test cases developed based on equivalence class partitioning can be
strengthened by use of a technique called boundary value analysis.
Many defects occur directly on, and just above and below, the edges of
equivalence classes.
BVA is a test case design technique that targets off-by-one errors.
Boundary Value Analysis
1. If an input condition for the software-under-test is specified as a
range of values, develop valid test cases for the ends of the range,
and invalid test cases for possibilities just above and below the
ends of the range.
For example, if a specification states that an input value for a module
must lie in the range between -1.0 and +1.0, then
valid tests that include the values -1.0 and +1.0 for the ends of the
range, as well as
invalid test cases for the values just above and below the ends, -1.1
and +1.1, should be included.
Boundary Value Analysis
2. If an input condition for the software-under-test is specified as a
number of values, develop valid test cases for the minimum and
maximum numbers, as well as invalid test cases that include one less
than the minimum and one greater than the maximum.
For example, for the real-estate module mentioned previously, which
specified that a house can have one to four owners:
valid tests include tests for the minimum and maximum values,
1 and 4 owners;
invalid tests include one less than the minimum and one greater
than the maximum, 0 and 5 owners.
Boundary Value Analysis
3. If the input or output of the software-under-test is an ordered set,
such as a table or a linear list, develop tests that focus on the first
and last elements of the set.
Example: a loan application.
Boundary Value Analysis
Using BVA – Range
When the programming element is a range type, we can arrive at test cases
using BVA as follows. For a range of values bounded by a minimum and a
maximum value, test:
(Minimum Value - precision)
Minimum Value
(Minimum Value + precision)
(Maximum Value - precision)
Maximum Value
(Maximum Value + precision)
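The six values can be derived mechanically, as in this minimal sketch (plain Java; BvaRange and testValues are hypothetical names):

// Derives the six BVA test inputs for a range bounded by min and max.
public class BvaRange {
    static int[] testValues(int min, int max, int precision) {
        return new int[] {
            min - precision, // just below the range (invalid)
            min,             // lower bound
            min + precision, // just inside the lower bound
            max - precision, // just inside the upper bound
            max,             // upper bound
            max + precision  // just above the range (invalid)
        };
    }

    public static void main(String[] args) {
        // For the Grade E range 0..39 with precision 1 this prints: -1 0 1 38 39 40
        for (int v : testValues(0, 39, 1)) {
            System.out.print(v + " ");
        }
    }
}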
BVA - An Illustration
Write test cases using BVA for a requirement that is stated as follows:
"In the examination grading system, if the student scores 0 to less than 40 then
assign E Grade, if the student scores between 40 to 49 then assign D Grade, if the
student scores between 50 to 69 then assign C Grade, if the student scores
between 70 to 84 then assign B Grade, and if the student scores 85 to 100 then
assign A Grade."
In the above problem definition, after analysis, we identify the following ranges
whose boundary values must be tested:
0 to 39
40 to 49
50 to 69
70 to 84
85 to 100
BVA - An Illustration
Based on BVA, we identify the following input values to test each boundary.
For the boundary values in range 0 to 39: Minimum Value = 0, Maximum
Value = 39, and the precision is 1.
Thus, the input values for testing these boundary values are:
(Minimum Value - precision) = -1, Minimum Value = 0,
(Minimum Value + precision) = 1, (Maximum Value - precision) = 38,
Maximum Value = 39, (Maximum Value + precision) = 40.
Thus, the input values for the boundary values 0 and 39 are:
-1, 0, 1, 38, 39, 40.
Along similar lines of analysis, we arrive at the following input values for
the other boundary ranges:
For 40 to 49, we have 39, 40, 41, 48, 49, 50
For 50 to 69, we have 49, 50, 51, 68, 69, 70
For 70 to 84, we have 69, 70, 71, 83, 84, 85
For 85 to 100, we have 84, 85, 86, 99, 100, 101
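Putting the derived values to work, here is a sketch of boundary tests for the 0 to 39 partition (JUnit 5 assumed; the grade method repeats the hypothetical implementation from the EP sketch earlier):

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class GradeBoundaryTest {
    // Same hypothetical grading rule as in the earlier EP sketch.
    static String grade(int score) {
        if (score < 0 || score > 100) return "Error";
        if (score <= 39) return "E";
        if (score <= 49) return "D";
        if (score <= 69) return "C";
        if (score <= 84) return "B";
        return "A";
    }

    @Test
    void boundariesOfTheGradeEPartition() {
        // Inputs derived by BVA for 0..39 with precision 1: -1, 0, 1, 38, 39, 40.
        assertEquals("Error", grade(-1)); // just below the partition
        assertEquals("E", grade(0));      // lower bound
        assertEquals("E", grade(1));      // lower bound + precision
        assertEquals("E", grade(38));     // upper bound - precision
        assertEquals("E", grade(39));     // upper bound
        assertEquals("D", grade(40));     // just above: the first value of the next partition
    }
}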
BVA – Test Case Design

More Related Content

What's hot

REVIEW TECHNIQUES.pptx
REVIEW TECHNIQUES.pptxREVIEW TECHNIQUES.pptx
REVIEW TECHNIQUES.pptxnavikvel
 
Software Testing Techniques
Software Testing TechniquesSoftware Testing Techniques
Software Testing TechniquesKiran Kumar
 
Control Flow Testing
Control Flow TestingControl Flow Testing
Control Flow TestingHirra Sultan
 
Ch 9 traceability and verification
Ch 9 traceability and verificationCh 9 traceability and verification
Ch 9 traceability and verificationKittitouch Suteeca
 
Risk-based Testing
Risk-based TestingRisk-based Testing
Risk-based TestingJohan Hoberg
 
Software Testing - Software Quality (Part 2)
Software Testing - Software Quality (Part 2)Software Testing - Software Quality (Part 2)
Software Testing - Software Quality (Part 2)Ajeng Savitri
 
Software review
Software reviewSoftware review
Software reviewamjad_09
 
Software Testing Principles
Software Testing PrinciplesSoftware Testing Principles
Software Testing PrinciplesKanoah
 
Integration tests: use the containers, Luke!
Integration tests: use the containers, Luke!Integration tests: use the containers, Luke!
Integration tests: use the containers, Luke!Roberto Franchini
 
SEMINARIO WEB EN VIVO: INTRODUCCIÓN AL AGILE TESTING
SEMINARIO WEB EN VIVO: INTRODUCCIÓN AL AGILE TESTINGSEMINARIO WEB EN VIVO: INTRODUCCIÓN AL AGILE TESTING
SEMINARIO WEB EN VIVO: INTRODUCCIÓN AL AGILE TESTINGtbaires
 
Bug reporting and tracking
Bug reporting and trackingBug reporting and tracking
Bug reporting and trackingVadym Muliavka
 
Code Quality in Ruby and Java
Code Quality in Ruby and JavaCode Quality in Ruby and Java
Code Quality in Ruby and JavaSteve Hayes
 
IT8076 - SOFTWARE TESTING
IT8076 - SOFTWARE TESTINGIT8076 - SOFTWARE TESTING
IT8076 - SOFTWARE TESTINGSathya R
 
Boundary Value Analysis and Equivalence class Partitioning Testing.pptx
Boundary Value Analysis and Equivalence class Partitioning Testing.pptxBoundary Value Analysis and Equivalence class Partitioning Testing.pptx
Boundary Value Analysis and Equivalence class Partitioning Testing.pptxlandesc
 
Best Practices for Test Case Writing
Best Practices for Test Case WritingBest Practices for Test Case Writing
Best Practices for Test Case WritingSarah Goldberg
 
Certificações em Teste e Qualidade de Software
Certificações em Teste e Qualidade de SoftwareCertificações em Teste e Qualidade de Software
Certificações em Teste e Qualidade de SoftwareCamilo Ribeiro
 

What's hot (20)

Code coverage
Code coverageCode coverage
Code coverage
 
REVIEW TECHNIQUES.pptx
REVIEW TECHNIQUES.pptxREVIEW TECHNIQUES.pptx
REVIEW TECHNIQUES.pptx
 
Software Testing Techniques
Software Testing TechniquesSoftware Testing Techniques
Software Testing Techniques
 
Control Flow Testing
Control Flow TestingControl Flow Testing
Control Flow Testing
 
Ch 9 traceability and verification
Ch 9 traceability and verificationCh 9 traceability and verification
Ch 9 traceability and verification
 
Risk-based Testing
Risk-based TestingRisk-based Testing
Risk-based Testing
 
Role of 3 I.pdf
Role of 3 I.pdfRole of 3 I.pdf
Role of 3 I.pdf
 
Software Testing - Software Quality (Part 2)
Software Testing - Software Quality (Part 2)Software Testing - Software Quality (Part 2)
Software Testing - Software Quality (Part 2)
 
Software review
Software reviewSoftware review
Software review
 
Software Testing Principles
Software Testing PrinciplesSoftware Testing Principles
Software Testing Principles
 
Integration tests: use the containers, Luke!
Integration tests: use the containers, Luke!Integration tests: use the containers, Luke!
Integration tests: use the containers, Luke!
 
Test case development
Test case developmentTest case development
Test case development
 
SEMINARIO WEB EN VIVO: INTRODUCCIÓN AL AGILE TESTING
SEMINARIO WEB EN VIVO: INTRODUCCIÓN AL AGILE TESTINGSEMINARIO WEB EN VIVO: INTRODUCCIÓN AL AGILE TESTING
SEMINARIO WEB EN VIVO: INTRODUCCIÓN AL AGILE TESTING
 
Bug reporting and tracking
Bug reporting and trackingBug reporting and tracking
Bug reporting and tracking
 
Software bugs
Software bugsSoftware bugs
Software bugs
 
Code Quality in Ruby and Java
Code Quality in Ruby and JavaCode Quality in Ruby and Java
Code Quality in Ruby and Java
 
IT8076 - SOFTWARE TESTING
IT8076 - SOFTWARE TESTINGIT8076 - SOFTWARE TESTING
IT8076 - SOFTWARE TESTING
 
Boundary Value Analysis and Equivalence class Partitioning Testing.pptx
Boundary Value Analysis and Equivalence class Partitioning Testing.pptxBoundary Value Analysis and Equivalence class Partitioning Testing.pptx
Boundary Value Analysis and Equivalence class Partitioning Testing.pptx
 
Best Practices for Test Case Writing
Best Practices for Test Case WritingBest Practices for Test Case Writing
Best Practices for Test Case Writing
 
Certificações em Teste e Qualidade de Software
Certificações em Teste e Qualidade de SoftwareCertificações em Teste e Qualidade de Software
Certificações em Teste e Qualidade de Software
 

Viewers also liked (16)

5. oose design new copy
5. oose design new   copy5. oose design new   copy
5. oose design new copy
 
Thank You...
Thank You...Thank You...
Thank You...
 
Cv2016 TESHIMA ATSUSHI
Cv2016 TESHIMA ATSUSHICv2016 TESHIMA ATSUSHI
Cv2016 TESHIMA ATSUSHI
 
2 uml
2 uml2 uml
2 uml
 
4. oose analysis
4. oose analysis4. oose analysis
4. oose analysis
 
Ooad
OoadOoad
Ooad
 
Recognitions_Career
Recognitions_CareerRecognitions_Career
Recognitions_Career
 
3. 1 req elicitation
3. 1 req elicitation3. 1 req elicitation
3. 1 req elicitation
 
Recognitions_Career
Recognitions_CareerRecognitions_Career
Recognitions_Career
 
Uml examples
Uml examplesUml examples
Uml examples
 
1. oose
1. oose1. oose
1. oose
 
Uml tutorial
Uml tutorialUml tutorial
Uml tutorial
 
2.1 usecase diagram
2.1 usecase diagram2.1 usecase diagram
2.1 usecase diagram
 
Cv2015 atsushi teshima
Cv2015 atsushi teshimaCv2015 atsushi teshima
Cv2015 atsushi teshima
 
3. 2 req elicitation activities
3. 2  req elicitation activities3. 2  req elicitation activities
3. 2 req elicitation activities
 
Ooad 2
Ooad 2Ooad 2
Ooad 2
 

Similar to 6. oose testing

unit-2_20-july-2018 (1).pptx
unit-2_20-july-2018 (1).pptxunit-2_20-july-2018 (1).pptx
unit-2_20-july-2018 (1).pptxPriyaFulpagare1
 
Software Quality Assurance
Software Quality AssuranceSoftware Quality Assurance
Software Quality AssuranceSaqib Raza
 
Object Oriented Testing(OOT) presentation slides
Object Oriented Testing(OOT) presentation slidesObject Oriented Testing(OOT) presentation slides
Object Oriented Testing(OOT) presentation slidesPunjab University
 
Software Engineering (Testing Overview)
Software Engineering (Testing Overview)Software Engineering (Testing Overview)
Software Engineering (Testing Overview)ShudipPal
 
Software testing-and-analysis
Software testing-and-analysisSoftware testing-and-analysis
Software testing-and-analysisWBUTTUTORIALS
 
Objectorientedtesting 160320132146
Objectorientedtesting 160320132146Objectorientedtesting 160320132146
Objectorientedtesting 160320132146vidhyyav
 
Object oriented testing
Object oriented testingObject oriented testing
Object oriented testingHaris Jamil
 
Software testing methods, levels and types
Software testing methods, levels and typesSoftware testing methods, levels and types
Software testing methods, levels and typesConfiz
 
Testing throughout the software life cycle - Testing & Implementation
Testing throughout the software life cycle - Testing & ImplementationTesting throughout the software life cycle - Testing & Implementation
Testing throughout the software life cycle - Testing & Implementationyogi syafrialdi
 
SENG202-v-and-v-modeling_121810.pptx
SENG202-v-and-v-modeling_121810.pptxSENG202-v-and-v-modeling_121810.pptx
SENG202-v-and-v-modeling_121810.pptxMinsasWorld
 
testing strategies and tactics
 testing strategies and tactics testing strategies and tactics
testing strategies and tacticsPreeti Mishra
 
Unit iv-testing-pune-university-sres-coe
Unit iv-testing-pune-university-sres-coeUnit iv-testing-pune-university-sres-coe
Unit iv-testing-pune-university-sres-coeHitesh Mohapatra
 
Chapter 13 software testing strategies
Chapter 13 software testing strategiesChapter 13 software testing strategies
Chapter 13 software testing strategiesSHREEHARI WADAWADAGI
 
SOFTWARE TESTING
SOFTWARE TESTINGSOFTWARE TESTING
SOFTWARE TESTINGacemindia
 

Similar to 6. oose testing (20)

Testing fundamentals
Testing fundamentalsTesting fundamentals
Testing fundamentals
 
unit-2_20-july-2018 (1).pptx
unit-2_20-july-2018 (1).pptxunit-2_20-july-2018 (1).pptx
unit-2_20-july-2018 (1).pptx
 
Software Quality Assurance
Software Quality AssuranceSoftware Quality Assurance
Software Quality Assurance
 
Different Types Of Testing
Different Types Of TestingDifferent Types Of Testing
Different Types Of Testing
 
Object Oriented Testing(OOT) presentation slides
Object Oriented Testing(OOT) presentation slidesObject Oriented Testing(OOT) presentation slides
Object Oriented Testing(OOT) presentation slides
 
Software Engineering (Testing Overview)
Software Engineering (Testing Overview)Software Engineering (Testing Overview)
Software Engineering (Testing Overview)
 
Software testing-and-analysis
Software testing-and-analysisSoftware testing-and-analysis
Software testing-and-analysis
 
Objectorientedtesting 160320132146
Objectorientedtesting 160320132146Objectorientedtesting 160320132146
Objectorientedtesting 160320132146
 
Object oriented testing
Object oriented testingObject oriented testing
Object oriented testing
 
Software testing methods, levels and types
Software testing methods, levels and typesSoftware testing methods, levels and types
Software testing methods, levels and types
 
Testing throughout the software life cycle - Testing & Implementation
Testing throughout the software life cycle - Testing & ImplementationTesting throughout the software life cycle - Testing & Implementation
Testing throughout the software life cycle - Testing & Implementation
 
Software testing
Software testingSoftware testing
Software testing
 
SDLCTesting
SDLCTestingSDLCTesting
SDLCTesting
 
SENG202-v-and-v-modeling_121810.pptx
SENG202-v-and-v-modeling_121810.pptxSENG202-v-and-v-modeling_121810.pptx
SENG202-v-and-v-modeling_121810.pptx
 
Software testing introduction
Software testing  introductionSoftware testing  introduction
Software testing introduction
 
testing strategies and tactics
 testing strategies and tactics testing strategies and tactics
testing strategies and tactics
 
Unit iv-testing-pune-university-sres-coe
Unit iv-testing-pune-university-sres-coeUnit iv-testing-pune-university-sres-coe
Unit iv-testing-pune-university-sres-coe
 
UNIT 1.pptx
UNIT 1.pptxUNIT 1.pptx
UNIT 1.pptx
 
Chapter 13 software testing strategies
Chapter 13 software testing strategiesChapter 13 software testing strategies
Chapter 13 software testing strategies
 
SOFTWARE TESTING
SOFTWARE TESTINGSOFTWARE TESTING
SOFTWARE TESTING
 

6. oose testing

  • 1. Object-Oriented Software Engineering Chapter 6 Object Oriented Testing Object-Oriented Systems Development Bahrami © Irwin/ McGraw-Hill
  • 2. An Overview of Testing • Testing is the process of finding differences between the expected behavior specified by system models and the observed behavior of the system • The goal of testing is to design tests that detects defects in the system and to reveal problems. • To test a system effectively, – a tester must have a detailed understanding of the whole system, ranging from the requirements to system design decisions and implementation issues. • A tester must also be knowledgeable of testing techniques and apply these techniques effectively and efficiently to meet time, budget, and quality constraints. Object-Oriented Systems Development Bahrami © Irwin/ McGraw-Hill
  • 3. An Overview of Testing • Testing is – a process of demonstrating that errors are not present” – a systematic attempt to find errors in a planned way • Software Testing answers the question “Does the s/w behave as specified…?” Testing is a process used to identify the correctness, completeness and quality of developed computer software. Object-Oriented Systems Development Bahrami © Irwin/ McGraw-Hill
  • 4. • One of the practical methods commonly used to detect the presence of errors (failures) in a computer program is to test it for a set of inputs. Our program The output is correct?I1, I2, I3, …, In, … Expected results = ? Obtained results “Inputs” Testing………….?Testing………….?
  • 5. Developer Independent Tester  Understands the system,  tests "gently”,  Test is driven by “delivery”.  learns about the system,  attempts to break the system  Test is driven by “quality”. Who Tests the Software….?Who Tests the Software….?
  • 6. Errors Requirements conformance Performance An indication of quality What Testing Shows…?What Testing Shows…?
  • 7. 7 • An early start to testing reduces the cost, time to rework and error free software that is delivered to the client. • However in Software Development Life Cycle (SDLC) testing can be started from the Requirements Gathering phase and lasts till the deployment of the software. • However it also depends on the development model that is being used. • For example in Water fall model formal testing is conducted in the Testing phase, • But in incremental model, testing is performed at the end of every increment/iteration and at the end the whole application is tested. When to start Testing …?When to start Testing …?
  • 8. 8 • Unlike when to start testing it is difficult to determine when to stop testing, as testing is a never ending process and no one can say that any software is 100% tested. • Following are the aspects which should be considered to stop the testing: • Testing Deadlines. • Management decision • Completion of test case execution. • Completion of Functional and code coverage to a certain point. • Bug rate falls below a certain level When to stop Testing …?When to stop Testing …?
  • 9. An Overview of Testing  Unit testing finds differences between the object design model and its corresponding component.  Structural testing finds differences between the system design model and a subset of integrated subsystems.  Functional testing finds differences between the use case model and the system.  Finally, performance testing finds differences between nonfunctional requirements and actual system performance. Object-Oriented Systems Development Bahrami © Irwin/ McGraw-Hill
  • 10. Software Reliability • Software reliability is the probability that a software system will not cause the failure of the system for a specified time under specified conditions [IEEE Std. 982-1989]. • The goal of testing is to maximize the number of discovered faults, and increase the reliability of the system. • The three classes of techniques for avoiding faults, detecting faults, and tolerating faults are 1. Fault Avoidance Techniques 2. Fault Detection Techniques 3. Fault Tolerance Techniques Object-Oriented Systems Development Bahrami © Irwin/ McGraw-Hill
  • 11. Quality control Techniques 1. Fault Avoidance Techniques Fault avoidance tries to prevent the occurrence of errors and failures by finding faults in the system before it is released. • Fault avoidance techniques include: 1. • development methodologies 2. • configuration management 3. • verification techniques 4. • reviews of the system models, in particular the code model. Object-Oriented Systems Development Bahrami © Irwin/ McGraw-Hill
  • 12. Fault Avoidance Techniques Development methodologies avoid faults by providing techniques that minimize fault introduction in the system models and code. Such techniques include  the unambiguous representation of requirements, the use of data abstraction and data encapsulation,  minimizing of the coupling between subsystems maximizing of subsystem coherence,  the early definition of subsystem interfaces, and the capture of rationale information for maintenance activities. Object-Oriented Systems Development Bahrami © Irwin/ McGraw-Hill
  • 13. Fault Avoidance Techniques Configuration management avoids faults caused by undisciplined change in the system models. • it is a common mistake to a change a subsystem interface without notifying all developers of calling components. • Configuration management can make sure that, if analysis models and code are becoming inconsistent with one another, analysts and implementer are notified. Verification attempts to find faults before any execution of the system. • possible for small operating system kernel • It has limits. • Difficult to verify quality of large complex systems. Object-Oriented Systems Development Bahrami © Irwin/ McGraw-Hill
  • 14. Fault Avoidance Techniques • A review is the manual inspection of parts or all aspects of the system without actually executing the system. • There are two types of reviews: – walkthrough – inspection. • Walkthrough, the developer informally presents the API, the code, and associated documentation of the component to the review team. • The review team makes comments on the mapping of the analysis and object design to the code using use cases and scenarios from the analysis phase. Object-Oriented Systems Development Bahrami © Irwin/ McGraw-Hill
  • 15. Fault Avoidance Techniques • An inspection is similar to a walkthrough, but the presentation of the unit is formal. • In fact, in a code inspection, the developer is not allowed to present the artifacts (models, code, and documentation). • This is done by the review team, which is responsible for checking the interface and code of the component against the requirements. • It also checks the algorithms for efficiency with respect to the nonfunctional requirements. • Finally, it checks comments about the code and compares them with the code itself to find inaccurate and incomplete comments. Object-Oriented Systems Development Bahrami © Irwin/ McGraw-Hill
  • 16. Fault Avoidance Techniques Object-Oriented Systems Development Bahrami © Irwin/ McGraw-Hill
  • 17. Fault detection techniques • Fault detection techniques, such as debugging and testing, are used during the development process to identify errors and find the underlying faults before releasing the system. • Debugging assumes that – faults can be found by starting from an unplanned failure. – developer moves the system through a succession of states, ultimately arriving at and identifying the erroneous state. There are two types of debugging: • The goal of correctness debugging is to find any deviation between the observed and specified functional requirements. • Performance debugging addresses the deviation between observed and specified nonfunctional requirements, such as response time. Object-Oriented Systems Development Bahrami © Irwin/ McGraw-Hill
  • 18. Fault detection techniques • Testing – fault detection technique that tries to create failures or errors in a planned way. – allows the developer to detect failures in the system before it is released to the customer. • Unit testing tries to find faults in participating objects and/or subsystems with respect to the use cases from the use case model • Integration testing is the activity of finding faults when testing the individually tested components together, for example, subsystems described in the subsystem decomposition, while executing the use cases and scenarios from the RAD. Object-Oriented Systems Development Bahrami © Irwin/ McGraw-Hill
  • 19. Testing Object-Oriented Systems Development Bahrami © Irwin/ McGraw-Hill
  • 20. Fault detection techniques • System testing tests all the components together, seen as a single system to identify errors with respect to the scenarios from the problem statement and the requirements and design goals identified in the analysis and system design, respectively: • Functional testing tests the requirements from the RAD and, if available, from the user manual. • Performance testing checks the nonfunctional requirements and additional design goals from the SDD. Note that functional and performance testing are both done by developers. • Acceptance testing and installation testing check the requirements against the project agreement and should be done by the client, if necessary with support by the developers. Object-Oriented Systems Development Bahrami © Irwin/ McGraw-Hill
  • 22. Information needed at different Levels of Testing 22
  • 24. 24 • A unit is the smallest testable part of software. • In procedural programming a unit may be an individual program, function, procedure, etc. In OOP, the smallest unit is a method. – Unit testing is often neglected but it is, in fact, the most important level of testing. Unit Testing
  • 25. 25 METHOD • Unit Testing is performed by using the method White Box Testing When is it performed? • First level of testing & performed prior to Integration Testing Who performs it? • performed by software developers themselves or their peers. Unit Testing
  • 26. 26  Integration Testing is a level of the software testing process where individual units are combined and tested as a group. Integration testing tests interface between components, interaction to different parts of system. Integration Testing
  • 27. 27 Big Bang is an approach to Integration Testing where all or most of the units are combined together and tested at one go. This approach is taken when the testing team receives the entire software in a bundle. Top Down is an approach to Integration Testing where top level units are tested first and lower level units are tested step by step after that. This approach is taken when top down development approach is followed. Integration Testing Approaches
  • 28. 28 Bottom Up is an approach to Integration Testing where bottom level units are tested first and upper level units step by step after that. This approach is taken when bottom up development approach is followed. Sandwich/Hybrid is an approach to Integration Testing which is a combination of Top Down and Bottom Up approaches. Integration Testing Approaches
  • 30. 30 Functional Testing:  Goal: Test functionality of system  Test cases are designed from the requirements analysis document (better: user manual) and centered around requirements and key functions (use cases).The system is treated as black box  Unit test cases can be reused, but new test cases have to be developed as well. Performance Testing:  Goal: Try to violate non-functional requirements  Test how the system behaves when overloaded.  Try unusual orders of execution  Check the system’s response to large volumes of data  What is the amount of time spent in different use cases? System Testing
  • 31. Types of Performance Testing • Stress Testing – Stress limits of system • Volume testing – Test what happens if large amounts of data are handled • Configuration testing – Test the various software and hardware configurations • Compatibility test – Test backward compatibility with existing systems • Timing testing – Evaluate response times and time to perform a function • Security testing – Try to violate security requirements • Environmental test – Test tolerances for heat, humidity, motion • Quality testing – Test reliability, maintain- ability & availability • Recovery testing – Test system’s response to presence of errors or loss of data • Human factors testing – Test with end users.
  • 32. Acceptance Testing  Goal: Demonstrate system is ready for operational use  Choice of tests is made by client  Many tests can be taken from integration testing  Acceptance test is performed by the client, not by the developer.  Alpha test: Client uses the software at the developer’s environment. Software used in a controlled setting, with the developer always ready to fix bugs.  Beta test: Conducted at client’s environment (developer is not present) Software gets a realistic workout in target environment
  • 33. Fault tolerance techniques Fault tolerance techniques assume that a system can be released with errors and that system failures can be dealt with by recovering from them at run time. • Fault tolerance is the recovery from failure while the system is executing. • A component is a part of the system that can be isolated for testing. A component can be an object, a group of objects, or one or more subsystems. • A fault, also called bug or defect, is a design or coding mistake that may cause abnormal component behavior. Object-Oriented Systems Development Bahrami © Irwin/ McGraw-Hill
  • 34. Testing Concepts • An error is a manifestation of a fault during the execution of the system. • A failure is a deviation between the specification of a component and its behavior. A failure is triggered by one or more errors. • A test case is a set of inputs and expected results that exercises a component with the purpose of causing failures and detecting faults. • A test stub is a partial implementation of components on which the tested component depends. • A test driver is a partial implementation of a component that depends on the tested component. Test stubs and drivers enable components to be isolated from the rest of the system for testing. • •A correction is a change to a component. The purpose of a correction is to repair a fault. Object-Oriented Systems Development Bahrami © Irwin/ McGraw-Hill
  • 35. Model elements used during Testing Object-Oriented Systems Development Bahrami © Irwin/ McGraw-Hill
  • 36. 36 Errors An error is a mistake, misconception, or misunderstanding on the part of a software developer. Faults /Defects /Bugs • A fault (defect) is introduced into the software as the result of an error. • It is an anomaly in the software that may cause it to behave incorrectly, and not according to its specification. Failures • A failure is the inability of a software system or component to perform its required functions within specified performance requirements A fault in the code does not always produce a failure. Faults Failures & ErrorsFaults Failures & Errors
  • 37. 37 LOC Code 1 program double (); 2 var x,y: integer; 3 begin 4 read(x); 5 y := x * x; 6 write(y) 7 end Fault: The fault that causes the failure is in line 5. The * operator is used instead of +. Error: The error that conduces to this fault may be: • a typing error (the developer has written * instead of +) • a conceptual error (e.g., the developer doesn't know how to double a number) Failure: x = 3 means y =9 Failure! • This is a failure of the system since the correct output would be 6 Fault - Failure - Error - illustrationsFault - Failure - Error - illustrations
  • 38. Fault error failure Object-Oriented Systems Development Bahrami © Irwin/ McGraw-Hill An Error due to mechanical cause or earth quake etc Example of Fault
  • 39. Testing Concepts Object-Oriented Systems Development Bahrami © Irwin/ McGraw-Hill Test cases A test case is a set of input data and expected results that exercises a component with the purpose of causing failures and detecting faults. A test case has five attributes: 1. Name 2. Location 3. Input 4. oracle, and 5. log (Table 9-1). The name of the test case allows the tester to distinguish between different test cases.
  • 40. Testing Concepts Object-Oriented Systems Development Bahrami © Irwin/ McGraw-Hill
  • 41. Testing Concepts Object-Oriented Systems Development Bahrami © Irwin/ McGraw-Hill Test Cases: A test model where TestA must precede TestB and TestC., TestA consists of TestA1 and TestA2, meaning that once TestA1and TestA2 are tested, TestA is tested; there is not separate test for TestA. A good test model has as few associations as possible, because tests that are not associated with each other can be executed independently from each other. This allows a tester to speed up testing.
  • 42. Testing Concepts Object-Oriented Systems Development Bahrami © Irwin/ McGraw-Hill Test Cases: Depending on which aspect of the system model is tested Test cases are classified into  black box tests and  white box tests,. Black box tests  focus on the input/output behavior of the component.  do not deal with the internal aspects of the component nor with the behavior or the structure of the components. White box tests  focus on the internal structure of the component.  makes sure that, independently from the particular input/output behavior
  • 43. Testing Concepts Object-Oriented Systems Development Bahrami © Irwin/ McGraw-Hill Correction  is a change to a component whose purpose is to repair a fault.  can range from a simple modification to a single component to a complete redesign of a data structure or a subsystem Several techniques can be used to handle such faults: a. Problem tracking includes the documentation of each failure, error, and fault detected, its correction, and the revisions of the components involved in the change. b. Regression testing includes the re-execution of all prior tests after a change. Regression testing is important in object-oriented methods. Regression testing is costly,
  • 44. Testing Concepts Object-Oriented Systems Development Bahrami © Irwin/ McGraw-Hill c. Rationale maintenance includes the documentation of the rationale of the change and its relationship with the rationale of the revised component. Rationale maintenance enables developers to avoid introducing new faults by inspecting the assumptions that were used to build the component.
• 45. Documenting Testing
Testing activities are documented in four types of documents:
1. the Test Plan,
2. the Test Case Specifications,
3. the Test Incident Reports, and
4. the Test Summary Report.
Test Plan: The Test Plan focuses on the managerial aspects of testing. It documents the scope, approach, resources, and schedule of testing activities. The requirements and the components to be tested are identified in this document.
The Test Plan (TP) and the Test Case Specifications (TCS) are written early in the process, as soon as test planning and each test case are completed. These documents are under configuration management and are updated as the system models change.
• 46. Documenting Testing
[Figure: the four testing documents]
• 47. Student Reading Assignment & Reference
• 48. Equivalence Partitioning
1. ‘‘If an input condition for the software-under-test is specified as a range of values, select one valid equivalence class that covers the allowed range and two invalid equivalence classes, one outside each end of the range.’’
For example, suppose the specification for a module says that an input, the length of a wire in millimetres, lies in the range 1–499; then select:
• one valid equivalence class that includes all values from 1 to 499;
• two invalid classes:
  – one that consists of all values less than 1, and
  – one that consists of all values greater than 499.
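A minimal sketch (the validator and the chosen representatives are illustrative) of EP tests for the wire-length rule, one value per equivalence class:

    def wire_length_valid(mm: int) -> bool:
        # specification: length must lie in the range 1..499 millimetres
        return 1 <= mm <= 499

    # One representative value per equivalence class suffices under EP.
    assert wire_length_valid(250)        # valid class: 1..499
    assert not wire_length_valid(0)      # invalid class: values below 1
    assert not wire_length_valid(500)    # invalid class: values above 499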
• 49. Equivalence Partitioning
2. ‘‘If an input condition for the software-under-test is specified as a number of values, then select one valid equivalence class that includes the allowed number of values and two invalid equivalence classes that are outside each end of the allowed number.’’
For example, if the specification for a real-estate module says that a house can have one to four owners, then we select:
• one valid equivalence class that includes all valid numbers of owners (one to four);
• two invalid equivalence classes:
  – fewer than one owner, and
  – more than four owners.
• 50. Equivalence Partitioning
3. ‘‘If an input condition for the software-under-test is specified as a set of valid input values, then select one valid equivalence class that contains all the members of the set and one invalid equivalence class for any value outside the set.’’
For example, if the specification for a paint module states that the colours RED, BLUE, GREEN and YELLOW are allowed as inputs, then select:
• one valid equivalence class that includes the set RED, BLUE, GREEN and YELLOW, and
• one invalid equivalence class for all other inputs.
• 51. Equivalence Partitioning
4. ‘‘If an input condition for the software-under-test is specified as a “must be” condition, select one valid equivalence class to represent the “must be” condition and one invalid class that does not include the “must be” condition.’’
For example, if the specification for a module states that the first character of a part identifier must be a letter, then select:
• one valid equivalence class where the first character is a letter, and
• one invalid class where the first character is not a letter.
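An illustrative sketch (function names are assumptions) covering rule 3 (a set of valid inputs) and rule 4 (a “must be” condition), again with one representative value per class:

    ALLOWED_COLOURS = {"RED", "BLUE", "GREEN", "YELLOW"}

    def colour_valid(colour: str) -> bool:
        return colour in ALLOWED_COLOURS          # rule 3: set of valid inputs

    def part_id_valid(part_id: str) -> bool:
        return part_id[:1].isalpha()              # rule 4: first char must be a letter

    assert colour_valid("GREEN")       # valid class: a member of the set
    assert not colour_valid("PURPLE")  # invalid class: any value outside the set
    assert part_id_valid("A123")       # valid class: first character is a letter
    assert not part_id_valid("9X45")   # invalid class: first character is not a letter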
• 52. Equivalence Partitioning - An Illustration
Write test cases using Equivalence Partitioning for a requirement that is stated as follows:
“In the examination grading system, if the student scores 0 to less than 40 then assign E Grade, if the student scores between 40 and 49 then assign D Grade, if the student scores between 50 and 69 then assign C Grade, if the student scores between 70 and 84 then assign B Grade, and if the student scores 85 to 100 then assign A Grade.”
In the above problem definition, after analysis, we identify the set of output values and the corresponding set of input values producing the same output. This analysis results in:
• values from 0 to 39 produce E Grade
• values from 40 to 49 produce D Grade
• values from 50 to 69 produce C Grade
• values from 70 to 84 produce B Grade
• values from 85 to 100 produce A Grade
• 53. Equivalence Partitioning - An Illustration
Based on EP, we identify the following input values to test each partition.
For the EP in range 0 to 39 (the EP producing E Grade): Minimum Value = 0, Maximum Value = 39, precision is 1. Thus, the input values for testing this EP for Grade E are:
Minimum Value = 0
(Minimum Value + precision) = 1
(Maximum Value - precision) = 38
Maximum Value = 39
Thus, the input values for the boundaries 0 and 39 are: 0, 1, 38, 39.
The output value is: Grade E.
• 54. Equivalence Partitioning - An Illustration
Along similar lines of analysis, we arrive at the following input values for the other EPs and their corresponding outputs:
• For 40 to 49, we have 40, 41, 48, 49 and the output value is “Grade D”
• For 50 to 69, we have 50, 51, 68, 69 and the output value is “Grade C”
• For 70 to 84, we have 70, 71, 83, 84 and the output value is “Grade B”
• For 85 to 100, we have 85, 86, 99, 100 and the output value is “Grade A”
In addition, we have invalid EPs with the input values -1 and 101, for which the corresponding output value is “Error”.
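A minimal runnable sketch (the grade function is an assumed implementation of the stated requirement, not taken from the slides) that exercises all of the EP-derived values above:

    def grade(score: int) -> str:
        # assumed implementation of the grading requirement
        if not 0 <= score <= 100:
            return "Error"
        if score <= 39: return "E"
        if score <= 49: return "D"
        if score <= 69: return "C"
        if score <= 84: return "B"
        return "A"

    ep_cases = {
        "E": [0, 1, 38, 39],
        "D": [40, 41, 48, 49],
        "C": [50, 51, 68, 69],
        "B": [70, 71, 83, 84],
        "A": [85, 86, 99, 100],
        "Error": [-1, 101],
    }
    for expected, scores in ep_cases.items():
        for s in scores:
            assert grade(s) == expected, f"score {s}: expected {expected}"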
• 55. Equivalence Partitioning - An Illustration
For these partitions, the test cases based on the EP technique are documented in the table.
[Table: test case design for the given example using the EP technique]
• 56. Boundary Value Analysis
The Boundary Value Analysis (BVA) technique is a test case design technique that identifies an appropriate number of input values to be supplied to test the system.
 During software development, developers frequently confuse the relational operators < and <=, and > and >=.
 As a result, a large number of errors tend to occur at the boundaries of the input domain.
 BVA therefore leads to the selection of test cases that exercise the boundary values of the input domain.
• 57. Boundary Value Analysis
“Bugs lurk in corners and congregate at boundaries…” --- Boris Beizer
The test cases developed using equivalence class partitioning can be strengthened by a technique called boundary value analysis. Many defects occur directly on, just above, or just below the edges of equivalence classes. BVA is a test case design technique aimed at catching such off-by-one errors.
• 58. Boundary Value Analysis
1. If an input condition for the software-under-test is specified as a range of values, develop valid test cases for the ends of the range, and invalid test cases for the possibilities just above and below the ends of the range.
For example, if a specification states that an input value for a module must lie in the range between -1.0 and +1.0, include:
• valid tests for the ends of the range: -1.0 and +1.0, as well as
• invalid test cases for values just below and above the ends: -1.1 and +1.1.
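A minimal sketch (the validator is an assumption) of the four boundary tests this rule produces:

    def in_unit_range(x: float) -> bool:
        # specification: value must lie in the range -1.0 .. +1.0
        return -1.0 <= x <= 1.0

    # Valid boundary tests: the exact ends of the range.
    assert in_unit_range(-1.0) and in_unit_range(1.0)
    # Invalid boundary tests: just below and just above the ends.
    assert not in_unit_range(-1.1) and not in_unit_range(1.1)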
• 59. Boundary Value Analysis
2. If an input condition for the software-under-test is specified as a number of values, develop valid test cases for the minimum and maximum numbers, as well as invalid test cases that include one less than the minimum and one greater than the maximum.
For example, for the real-estate module mentioned previously, which specified that a house can have one to four owners, develop:
• valid tests for the minimum and maximum values: 1 and 4 owners, and
• invalid tests for one less than the minimum and one greater than the maximum: 0 and 5 owners.
• 60. Boundary Value Analysis
3. If the input or output of the software-under-test is an ordered set, such as a table or a linear list, develop tests that focus on the first and last elements of the set.
Example: a loan application.
• 61. Boundary Value Analysis
Using BVA - Range: When the programming element is a range type bounded by a and b, we can arrive at test cases using BVA by testing:
(Minimum Value - precision)
Minimum Value
(Minimum Value + precision)
(Maximum Value - precision)
Maximum Value
(Maximum Value + precision)
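A minimal helper (illustrative, not from the slides) that generates these six values for a given range and precision:

    def bva_values(minimum, maximum, precision=1):
        # Six BVA test inputs for a range bounded by minimum and maximum.
        return [
            minimum - precision,  # just below the range (invalid)
            minimum,              # lower bound (valid)
            minimum + precision,  # just inside the lower bound (valid)
            maximum - precision,  # just inside the upper bound (valid)
            maximum,              # upper bound (valid)
            maximum + precision,  # just above the range (invalid)
        ]

    print(bva_values(0, 39))   # -> [-1, 0, 1, 38, 39, 40]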
• 62. BVA - An Illustration
Write test cases using BVA for a requirement that is stated as follows:
“In the examination grading system, if the student scores 0 to less than 40 then assign E Grade, if the student scores between 40 and 49 then assign D Grade, if the student scores between 50 and 69 then assign C Grade, if the student scores between 70 and 84 then assign B Grade, and if the student scores 85 to 100 then assign A Grade.”
In the above problem definition, after analysis, we identify the following boundary ranges:
0 to 39
40 to 49
50 to 69
70 to 84
85 to 100
• 63. BVA - An Illustration
Based on BVA, we identify the following input values to test each boundary.
For the boundary values in range 0 to 39: Minimum Value = 0, Maximum Value = 39, precision is 1. Thus, the input values for testing these boundary values are:
(Minimum Value - precision) = -1
Minimum Value = 0
(Minimum Value + precision) = 1
(Maximum Value - precision) = 38
Maximum Value = 39
(Maximum Value + precision) = 40
Thus, the input values for boundary values 0 and 39 are: -1, 0, 1, 38, 39, 40.
Along similar lines of analysis, we arrive at the following input values for the other boundary ranges:
For 40 to 49, we have 39, 40, 41, 48, 49, 50
For 50 to 69, we have 49, 50, 51, 68, 69, 70
For 70 to 84, we have 69, 70, 71, 83, 84, 85
For 85 to 100, we have 84, 85, 86, 99, 100, 101
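A runnable sketch (reusing the same assumed grade function as in the EP example) that pairs every BVA-derived input above with its expected output; note that off-boundary values such as 40 or 50 fall into the neighbouring partition rather than producing an error:

    def grade(score: int) -> str:
        # assumed implementation of the grading requirement
        if not 0 <= score <= 100:
            return "Error"
        if score <= 39: return "E"
        if score <= 49: return "D"
        if score <= 69: return "C"
        if score <= 84: return "B"
        return "A"

    # BVA inputs from the analysis above (duplicates across adjacent ranges
    # listed once), paired with their expected grades.
    bva_cases = [
        (-1, "Error"), (0, "E"), (1, "E"), (38, "E"), (39, "E"), (40, "D"),
        (41, "D"), (48, "D"), (49, "D"), (50, "C"), (51, "C"), (68, "C"),
        (69, "C"), (70, "B"), (71, "B"), (83, "B"), (84, "B"), (85, "A"),
        (86, "A"), (99, "A"), (100, "A"), (101, "Error"),
    ]
    for score, expected in bva_cases:
        assert grade(score) == expected, f"score {score}: expected {expected}"
    print("all BVA test cases pass")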
• 64. BVA – Test Case Design
[Table: test case design for the grading example using the BVA technique]